danny0405 commented on a change in pull request #12632:
URL: https://github.com/apache/flink/pull/12632#discussion_r440132608



##########
File path: docs/dev/table/connectors/formats/canal.md
##########
@@ -0,0 +1,175 @@
+---
+title: "Canal Format"
+nav-title: Canal
+nav-parent_id: sql-formats
+nav-pos: 5
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-info">Changelog-Data-Capture Format</span>
+<span class="label label-info">Format: Deserialization Schema</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+[Canal](https://github.com/alibaba/canal/wiki) is a CDC (Changelog Data 
Capture) tool that can stream changes in real-time from MySQL into other 
systems. Canal provides an unified format schema for changelog and supports to 
serialize messages using JSON and 
[protobuf](https://developers.google.com/protocol-buffers).
+
+Flink supports to interpret Canal JSON messages as INSERT/UPDATE/DELETE 
messages into Flink SQL system. This is useful in many cases to leverage this 
feature, such as synchronizing incremental data from databases to other 
systems, auditing logs, materialized views on databases, temporal join changing 
history of a database table and so on.

Review comment:
       Also give a hyperlink for CDC.
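       For example (assuming the Wikipedia article is an acceptable link target), the first mention could be linked like this:

```markdown
[CDC](https://en.wikipedia.org/wiki/Change_data_capture) (Changelog Data Capture)
```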

##########
File path: docs/dev/table/connectors/elasticsearch.md
##########
@@ -42,8 +42,8 @@ In order to setup the Elasticsearch connector, the following 
table provides depe
 
 | Elasticsearch Version   | Maven dependency                                   
                | SQL Client JAR         |
 | :---------------------- | 
:----------------------------------------------------------------- | 
:----------------------|
-| 6.x                     | 
`flink-connector-elasticsearch6{{site.scala_version_suffix}}`      | {% if 
site.is_stable %} 
[Download](https://repo.maven.apache.org/maven2/org/apache/flink/flink-connector-elasticsearch6{{site.scala_version_suffix}}/{{site.version}}/flink-connector-elasticsearch6{{site.scala_version_suffix}}-{{site.version}}.jar)
 {% else %} Only available for stable releases {% endif %}|
-| 7.x and later versions  | 
`flink-connector-elasticsearch7{{site.scala_version_suffix}}`      | {% if 
site.is_stable %} 
[Download](https://repo.maven.apache.org/maven2/org/apache/flink/flink-connector-elasticsearch7{{site.scala_version_suffix}}/{{site.version}}/flink-connector-elasticsearch7{{site.scala_version_suffix}}-{{site.version}}.jar)
 {% else %} Only available for stable releases {% endif %}|
+| 6.x                     | 
`flink-connector-elasticsearch6{{site.scala_version_suffix}}`      | {% if 
site.is_stable %} 
[Download](https://repo.maven.apache.org/maven2/org/apache/flink/flink-connector-elasticsearch6{{site.scala_version_suffix}}/{{site.version}}/flink-connector-elasticsearch6{{site.scala_version_suffix}}-{{site.version}}.jar)
 {% else %} Only available for [stable releases]({{ site.stable_baseurl 
}}/dev/table/connectors/kafka.html) {% endif %}|
+| 7.x and later versions  | 
`flink-connector-elasticsearch7{{site.scala_version_suffix}}`      | {% if 
site.is_stable %} 
[Download](https://repo.maven.apache.org/maven2/org/apache/flink/flink-connector-elasticsearch7{{site.scala_version_suffix}}/{{site.version}}/flink-connector-elasticsearch7{{site.scala_version_suffix}}-{{site.version}}.jar)
 {% else %} Only available for [stable releases]({{ site.stable_baseurl 
}}/dev/table/connectors/kafka.html) {% endif %}|

Review comment:
       Why kafka? This is the Elasticsearch connector page, so the `stable releases` link should presumably point to `elasticsearch.html` rather than `kafka.html`.
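       A sketch of the corrected cell, assuming the Elasticsearch connector page is the intended target:

```
{% else %} Only available for [stable releases]({{ site.stable_baseurl }}/dev/table/connectors/elasticsearch.html) {% endif %}
```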

##########
File path: docs/dev/table/connectors/formats/canal.md
##########
@@ -0,0 +1,175 @@
+---
+title: "Canal Format"
+nav-title: Canal
+nav-parent_id: sql-formats
+nav-pos: 5
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-info">Changelog-Data-Capture Format</span>
+<span class="label label-info">Format: Deserialization Schema</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+[Canal](https://github.com/alibaba/canal/wiki) is a CDC (Changelog Data 
Capture) tool that can stream changes in real-time from MySQL into other 
systems. Canal provides an unified format schema for changelog and supports to 
serialize messages using JSON and 
[protobuf](https://developers.google.com/protocol-buffers).
+
+Flink supports to interpret Canal JSON messages as INSERT/UPDATE/DELETE 
messages into Flink SQL system. This is useful in many cases to leverage this 
feature, such as synchronizing incremental data from databases to other 
systems, auditing logs, materialized views on databases, temporal join changing 
history of a database table and so on.

Review comment:
       This needs to illustrate that protobuf is the default format for Canal.

##########
File path: docs/dev/table/connectors/formats/canal.md
##########
@@ -0,0 +1,175 @@
+---
+title: "Canal Format"
+nav-title: Canal
+nav-parent_id: sql-formats
+nav-pos: 5
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-info">Changelog-Data-Capture Format</span>
+<span class="label label-info">Format: Deserialization Schema</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+[Canal](https://github.com/alibaba/canal/wiki) is a CDC (Changelog Data 
Capture) tool that can stream changes in real-time from MySQL into other 
systems. Canal provides an unified format schema for changelog and supports to 
serialize messages using JSON and 
[protobuf](https://developers.google.com/protocol-buffers).
+
+Flink supports to interpret Canal JSON messages as INSERT/UPDATE/DELETE 
messages into Flink SQL system. This is useful in many cases to leverage this 
feature, such as synchronizing incremental data from databases to other 
systems, auditing logs, materialized views on databases, temporal join changing 
history of a database table and so on.

Review comment:
       `such as ... and so on` is too long a sentence; can we split it and use a list `<ul></ul>` here? A possible split is sketched below.
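       A possible rewrite, as a sketch (the wording is illustrative only):

```markdown
Flink supports interpreting Canal JSON messages as INSERT/UPDATE/DELETE messages in the Flink SQL system. This is useful in many cases, for example:

- synchronizing incremental data from databases to other systems
- auditing logs
- real-time materialized views on databases
- temporal join on the changing history of a database table
```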

##########
File path: docs/dev/table/connectors/formats/debezium.md
##########
@@ -0,0 +1,191 @@
+---
+title: "Debezium Format"
+nav-title: Debezium
+nav-parent_id: sql-formats
+nav-pos: 4
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-info">Changelog-Data-Capture Format</span>
+<span class="label label-info">Format: Deserialization Schema</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+[Debezium](https://debezium.io/) is a CDC (Changelog Data Capture) tool that 
can stream changes in real-time from MySQL, PostgreSQL, Oracle, Microsoft SQL 
Server and many other databases into Kafka. Debezium provides an unified format 
schema for changelog and supports to serialize messages using JSON and [Apache 
Avro](https://avro.apache.org/).
+
+Flink supports to interpret Debezium JSON messages as INSERT/UPDATE/DELETE 
messages into Flink SQL system. This is useful in many cases to leverage this 
feature, such as synchronizing incremental data from databases to other 
systems, auditing logs, materialized views on databases, temporal join changing 
history of a database table and so on.
+
+Note: Support for interpreting Debezium Avro messages and emitting Debezium 
messages is on the roadmap.
+
+Dependencies
+------------
+
+In order to setup the Debezium format, the following table provides dependency 
information for both projects using a build automation tool (such as Maven or 
SBT) and SQL Client with SQL JAR bundles.
+
+| Maven dependency   | SQL Client JAR         |
+| :----------------- | :----------------------|
+| `flink-json`       | Built-in               |
+
+*Note: please refer to [Debezium 
documentation](https://debezium.io/documentation/reference/1.1/index.html) 
about how to setup a Debezium Kafka Connect to synchronize changelog to Kafka 
topics.*
+
+
+How to use Debezium format
+----------------
+
+Debezium provides an unified format for changelog, here is a simple example 
for an update operation captured from a MySQL `products` table:
+
+```json
+{
+  "before": {
+    "id": 111,
+    "name": "scooter",
+    "description": "Big 2-wheel scooter",
+    "weight": 5.18
+  },
+  "after": {
+    "id": 111,
+    "name": "scooter",
+    "description": "Big 2-wheel scooter",
+    "weight": 5.15
+  },
+  "source": {...},
+  "op": "u",
+  "ts_ms": 1589362330904,
+  "transaction": null
+}
+```
+
+*Note: please refer to [Debezium 
documentation](https://debezium.io/documentation/reference/1.1/connectors/mysql.html#mysql-connector-events_debezium)
 about the meaning of each fields.*
+
+The MySQL `products` table has 4 columns (`id`, `name`, `description` and 
`weight`). The above JSON message is an update change event on the `products` 
table where the `weight` value of the row with `id = 111` is changed from 
`5.18` to `5.15`.
+Assuming this messages is synchronized to Kafka topic `products_binlog`, then 
we can use the following DDL to consume this topic and interpret the change 
events.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE topic_products (
+  -- schema is totally the same to the MySQL "products" table
+  id BIGINT,
+  name STRING,
+  description STRING,
+  weight DECIMAL(10, 2)
+) WITH (
+ 'connector' = 'kafka',
+ 'topic' = 'products_binlog',
+ 'properties.bootstrap.servers' = 'localhost:9092',
+ 'properties.group.id' = 'testGroup',
+ 'format' = 'debezium-json'  -- using debezium-json as the format
+)
+{% endhighlight %}
+</div>
+</div>
+
+In some cases, users may setup the Debezium Kafka Connect with the Kafka 
configuration `'value.converter.schemas.enable'` enabled to include schema in 
the message. Then the Debezium JSON message may look like this:
+
+```json
+{
+  "schema": {...},
+  "payload": {
+    "before": {
+      "id": 111,
+      "name": "scooter",
+      "description": "Big 2-wheel scooter",
+      "weight": 5.18
+    },
+    "after": {
+      "id": 111,
+      "name": "scooter",
+      "description": "Big 2-wheel scooter",
+      "weight": 5.15
+    },
+    "source": {...},
+    "op": "u",
+    "ts_ms": 1589362330904,
+    "transaction": null
+  }
+}
+```
+
+In order to interpret such messages, you need to add the option 
`'debezium-json.schema-include' = 'true'` into above DDL WITH clause (`false` 
by default). Usually, this is not recommended to include schema because this 
makes the messages very verbose and reduces parsing performance.
+

Review comment:
       We'd better give an example without the schema, I think; something like the sketch below.
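       For reference, the same update event without the schema wrapper (this is the form already shown earlier on the page):

```json
{
  "before": {
    "id": 111,
    "name": "scooter",
    "description": "Big 2-wheel scooter",
    "weight": 5.18
  },
  "after": {
    "id": 111,
    "name": "scooter",
    "description": "Big 2-wheel scooter",
    "weight": 5.15
  },
  "source": {...},
  "op": "u",
  "ts_ms": 1589362330904,
  "transaction": null
}
```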

##########
File path: docs/dev/table/connectors/formats/canal.md
##########
@@ -0,0 +1,175 @@
+---
+title: "Canal Format"
+nav-title: Canal
+nav-parent_id: sql-formats
+nav-pos: 5
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-info">Changelog-Data-Capture Format</span>
+<span class="label label-info">Format: Deserialization Schema</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+[Canal](https://github.com/alibaba/canal/wiki) is a CDC (Changelog Data 
Capture) tool that can stream changes in real-time from MySQL into other 
systems. Canal provides an unified format schema for changelog and supports to 
serialize messages using JSON and 
[protobuf](https://developers.google.com/protocol-buffers).
+
+Flink supports to interpret Canal JSON messages as INSERT/UPDATE/DELETE 
messages into Flink SQL system. This is useful in many cases to leverage this 
feature, such as synchronizing incremental data from databases to other 
systems, auditing logs, materialized views on databases, temporal join changing 
history of a database table and so on.
+
+Note: Support for interpreting Canal protobuf messages and emitting Canal 
messages is on the roadmap.
+
+Dependencies
+------------
+
+In order to setup the Canal format, the following table provides dependency 
information for both projects using a build automation tool (such as Maven or 
SBT) and SQL Client with SQL JAR bundles.
+
+| Maven dependency   | SQL Client JAR         |
+| :----------------- | :----------------------|
+| `flink-json`       | Built-in               |
+
+*Note: please refer to [Canal 
documentation](https://github.com/alibaba/canal/wiki) about how to deploy Canal 
to synchronize changelog to message queues.*
+
+
+How to use Canal format
+----------------
+
+Canal provides an unified format for changelog, here is a simple example for 
an update operation captured from a MySQL `products` table:
+
+```json
+{
+  "data": [
+    {
+      "id": "111",
+      "name": "scooter",
+      "description": "Big 2-wheel scooter",
+      "weight": "5.18"
+    }
+  ],
+  "database": "inventory",
+  "es": 1589373560000,
+  "id": 9,
+  "isDdl": false,
+  "mysqlType": {
+    "id": "INTEGER",
+    "name": "VARCHAR(255)",
+    "description": "VARCHAR(512)",
+    "weight": "FLOAT"
+  },
+  "old": [
+    {
+      "weight": "5.15"
+    }
+  ],
+  "pkNames": [
+    "id"
+  ],
+  "sql": "",
+  "sqlType": {
+    "id": 4,
+    "name": 12,
+    "description": 12,
+    "weight": 7
+  },
+  "table": "products",
+  "ts": 1589373560798,
+  "type": "UPDATE"
+}
+```
+
+*Note: please refer to [Canal 
documentation](https://github.com/alibaba/canal/wiki) about the meaning of each 
fields.*
+
+The MySQL `products` table has 4 columns (`id`, `name`, `description` and 
`weight`). The above JSON message is an update change event on the `products` 
table where the `weight` value of the row with `id = 111` is changed from 
`5.18` to `5.15`.
+Assuming this messages is synchronized to Kafka topic `products_binlog`, then 
we can use the following DDL to consume this topic and interpret the change 
events.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE topic_products (
+  -- schema is totally the same to the MySQL "products" table
+  id BIGINT,
+  name STRING,
+  description STRING,
+  weight DECIMAL(10, 2)
+) WITH (
+ 'connector' = 'kafka',
+ 'topic' = 'products_binlog',
+ 'properties.bootstrap.servers' = 'localhost:9092',
+ 'properties.group.id' = 'testGroup',
+ 'format' = 'canal-json'  -- using canal-json as the format
+)
+{% endhighlight %}
+</div>
+</div>
+
+After registering the topic as a Flink table, then you can consume the Canal 
messages as a changelog source.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+-- a real-time materialized view on the MySQL "products"
+-- which calculate the latest average of weight for the same products
+SELECT name, AVG(weight) FROM topic_products GROUP BY name;

Review comment:
       `calculate` -> `calculates`

##########
File path: docs/dev/table/connectors/formats/canal.md
##########
@@ -0,0 +1,175 @@
+---
+title: "Canal Format"
+nav-title: Canal
+nav-parent_id: sql-formats
+nav-pos: 5
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-info">Changelog-Data-Capture Format</span>
+<span class="label label-info">Format: Deserialization Schema</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+[Canal](https://github.com/alibaba/canal/wiki) is a CDC (Changelog Data 
Capture) tool that can stream changes in real-time from MySQL into other 
systems. Canal provides an unified format schema for changelog and supports to 
serialize messages using JSON and 
[protobuf](https://developers.google.com/protocol-buffers).
+
+Flink supports to interpret Canal JSON messages as INSERT/UPDATE/DELETE 
messages into Flink SQL system. This is useful in many cases to leverage this 
feature, such as synchronizing incremental data from databases to other 
systems, auditing logs, materialized views on databases, temporal join changing 
history of a database table and so on.
+
+Note: Support for interpreting Canal protobuf messages and emitting Canal 
messages is on the roadmap.
+
+Dependencies
+------------
+
+In order to setup the Canal format, the following table provides dependency 
information for both projects using a build automation tool (such as Maven or 
SBT) and SQL Client with SQL JAR bundles.
+
+| Maven dependency   | SQL Client JAR         |
+| :----------------- | :----------------------|
+| `flink-json`       | Built-in               |
+
+*Note: please refer to [Canal 
documentation](https://github.com/alibaba/canal/wiki) about how to deploy Canal 
to synchronize changelog to message queues.*
+
+
+How to use Canal format
+----------------
+
+Canal provides an unified format for changelog, here is a simple example for 
an update operation captured from a MySQL `products` table:
+
+```json
+{
+  "data": [
+    {
+      "id": "111",
+      "name": "scooter",
+      "description": "Big 2-wheel scooter",
+      "weight": "5.18"
+    }
+  ],
+  "database": "inventory",
+  "es": 1589373560000,
+  "id": 9,
+  "isDdl": false,
+  "mysqlType": {
+    "id": "INTEGER",
+    "name": "VARCHAR(255)",
+    "description": "VARCHAR(512)",
+    "weight": "FLOAT"
+  },
+  "old": [
+    {
+      "weight": "5.15"
+    }
+  ],
+  "pkNames": [
+    "id"
+  ],
+  "sql": "",
+  "sqlType": {
+    "id": 4,
+    "name": 12,
+    "description": 12,
+    "weight": 7
+  },
+  "table": "products",
+  "ts": 1589373560798,
+  "type": "UPDATE"
+}
+```
+
+*Note: please refer to [Canal 
documentation](https://github.com/alibaba/canal/wiki) about the meaning of each 
fields.*
+
+The MySQL `products` table has 4 columns (`id`, `name`, `description` and 
`weight`). The above JSON message is an update change event on the `products` 
table where the `weight` value of the row with `id = 111` is changed from 
`5.18` to `5.15`.
+Assuming this messages is synchronized to Kafka topic `products_binlog`, then 
we can use the following DDL to consume this topic and interpret the change 
events.
+

Review comment:
       `Assuming this messages is` -> `Assuming this message has been`

##########
File path: docs/dev/table/connectors/formats/canal.md
##########
@@ -0,0 +1,175 @@
+---
+title: "Canal Format"
+nav-title: Canal
+nav-parent_id: sql-formats
+nav-pos: 5
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-info">Changelog-Data-Capture Format</span>
+<span class="label label-info">Format: Deserialization Schema</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+[Canal](https://github.com/alibaba/canal/wiki) is a CDC (Changelog Data 
Capture) tool that can stream changes in real-time from MySQL into other 
systems. Canal provides an unified format schema for changelog and supports to 
serialize messages using JSON and 
[protobuf](https://developers.google.com/protocol-buffers).
+
+Flink supports to interpret Canal JSON messages as INSERT/UPDATE/DELETE 
messages into Flink SQL system. This is useful in many cases to leverage this 
feature, such as synchronizing incremental data from databases to other 
systems, auditing logs, materialized views on databases, temporal join changing 
history of a database table and so on.
+
+Note: Support for interpreting Canal protobuf messages and emitting Canal 
messages is on the roadmap.
+
+Dependencies
+------------
+
+In order to setup the Canal format, the following table provides dependency 
information for both projects using a build automation tool (such as Maven or 
SBT) and SQL Client with SQL JAR bundles.
+
+| Maven dependency   | SQL Client JAR         |
+| :----------------- | :----------------------|
+| `flink-json`       | Built-in               |
+
+*Note: please refer to [Canal 
documentation](https://github.com/alibaba/canal/wiki) about how to deploy Canal 
to synchronize changelog to message queues.*
+
+
+How to use Canal format
+----------------
+
+Canal provides an unified format for changelog, here is a simple example for 
an update operation captured from a MySQL `products` table:
+
+```json
+{
+  "data": [
+    {
+      "id": "111",
+      "name": "scooter",
+      "description": "Big 2-wheel scooter",
+      "weight": "5.18"
+    }
+  ],
+  "database": "inventory",
+  "es": 1589373560000,
+  "id": 9,
+  "isDdl": false,
+  "mysqlType": {
+    "id": "INTEGER",
+    "name": "VARCHAR(255)",
+    "description": "VARCHAR(512)",
+    "weight": "FLOAT"
+  },
+  "old": [
+    {
+      "weight": "5.15"
+    }
+  ],
+  "pkNames": [
+    "id"
+  ],
+  "sql": "",
+  "sqlType": {
+    "id": 4,
+    "name": 12,
+    "description": 12,
+    "weight": 7
+  },
+  "table": "products",
+  "ts": 1589373560798,
+  "type": "UPDATE"
+}
+```
+
+*Note: please refer to [Canal 
documentation](https://github.com/alibaba/canal/wiki) about the meaning of each 
fields.*
+
+The MySQL `products` table has 4 columns (`id`, `name`, `description` and 
`weight`). The above JSON message is an update change event on the `products` 
table where the `weight` value of the row with `id = 111` is changed from 
`5.18` to `5.15`.
+Assuming this messages is synchronized to Kafka topic `products_binlog`, then 
we can use the following DDL to consume this topic and interpret the change 
events.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE topic_products (
+  -- schema is totally the same to the MySQL "products" table
+  id BIGINT,
+  name STRING,
+  description STRING,
+  weight DECIMAL(10, 2)
+) WITH (
+ 'connector' = 'kafka',
+ 'topic' = 'products_binlog',
+ 'properties.bootstrap.servers' = 'localhost:9092',
+ 'properties.group.id' = 'testGroup',
+ 'format' = 'canal-json'  -- using canal-json as the format
+)
+{% endhighlight %}
+</div>
+</div>
+
+After registering the topic as a Flink table, then you can consume the Canal 
messages as a changelog source.
+

Review comment:
       Remove `then`.

##########
File path: docs/dev/table/connectors/formats/debezium.md
##########
@@ -0,0 +1,191 @@
+---
+title: "Debezium Format"
+nav-title: Debezium
+nav-parent_id: sql-formats
+nav-pos: 4
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-info">Changelog-Data-Capture Format</span>
+<span class="label label-info">Format: Deserialization Schema</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+[Debezium](https://debezium.io/) is a CDC (Changelog Data Capture) tool that 
can stream changes in real-time from MySQL, PostgreSQL, Oracle, Microsoft SQL 
Server and many other databases into Kafka. Debezium provides an unified format 
schema for changelog and supports to serialize messages using JSON and [Apache 
Avro](https://avro.apache.org/).
+
+Flink supports to interpret Debezium JSON messages as INSERT/UPDATE/DELETE 
messages into Flink SQL system. This is useful in many cases to leverage this 
feature, such as synchronizing incremental data from databases to other 
systems, auditing logs, materialized views on databases, temporal join changing 
history of a database table and so on.
+
+Note: Support for interpreting Debezium Avro messages and emitting Debezium 
messages is on the roadmap.
+
+Dependencies
+------------
+
+In order to setup the Debezium format, the following table provides dependency 
information for both projects using a build automation tool (such as Maven or 
SBT) and SQL Client with SQL JAR bundles.
+
+| Maven dependency   | SQL Client JAR         |
+| :----------------- | :----------------------|
+| `flink-json`       | Built-in               |
+
+*Note: please refer to [Debezium 
documentation](https://debezium.io/documentation/reference/1.1/index.html) 
about how to setup a Debezium Kafka Connect to synchronize changelog to Kafka 
topics.*
+
+
+How to use Debezium format
+----------------
+
+Debezium provides an unified format for changelog, here is a simple example 
for an update operation captured from a MySQL `products` table:
+
+```json
+{
+  "before": {
+    "id": 111,
+    "name": "scooter",
+    "description": "Big 2-wheel scooter",
+    "weight": 5.18
+  },
+  "after": {
+    "id": 111,
+    "name": "scooter",
+    "description": "Big 2-wheel scooter",
+    "weight": 5.15
+  },
+  "source": {...},
+  "op": "u",
+  "ts_ms": 1589362330904,
+  "transaction": null
+}
+```
+
+*Note: please refer to [Debezium 
documentation](https://debezium.io/documentation/reference/1.1/connectors/mysql.html#mysql-connector-events_debezium)
 about the meaning of each fields.*
+
+The MySQL `products` table has 4 columns (`id`, `name`, `description` and 
`weight`). The above JSON message is an update change event on the `products` 
table where the `weight` value of the row with `id = 111` is changed from 
`5.18` to `5.15`.
+Assuming this messages is synchronized to Kafka topic `products_binlog`, then 
we can use the following DDL to consume this topic and interpret the change 
events.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE topic_products (
+  -- schema is totally the same to the MySQL "products" table
+  id BIGINT,
+  name STRING,
+  description STRING,
+  weight DECIMAL(10, 2)
+) WITH (
+ 'connector' = 'kafka',
+ 'topic' = 'products_binlog',
+ 'properties.bootstrap.servers' = 'localhost:9092',
+ 'properties.group.id' = 'testGroup',
+ 'format' = 'debezium-json'  -- using debezium-json as the format
+)
+{% endhighlight %}
+</div>
+</div>
+
+In some cases, users may setup the Debezium Kafka Connect with the Kafka 
configuration `'value.converter.schemas.enable'` enabled to include schema in 
the message. Then the Debezium JSON message may look like this:
+
+```json
+{
+  "schema": {...},
+  "payload": {
+    "before": {
+      "id": 111,
+      "name": "scooter",
+      "description": "Big 2-wheel scooter",
+      "weight": 5.18
+    },
+    "after": {
+      "id": 111,
+      "name": "scooter",
+      "description": "Big 2-wheel scooter",
+      "weight": 5.15
+    },
+    "source": {...},
+    "op": "u",
+    "ts_ms": 1589362330904,
+    "transaction": null
+  }
+}
+```
+
+In order to interpret such messages, you need to add the option 
`'debezium-json.schema-include' = 'true'` into above DDL WITH clause (`false` 
by default). Usually, this is not recommended to include schema because this 
makes the messages very verbose and reduces parsing performance.
+
+After registering the topic as a Flink table, then you can consume the 
Debezium messages as a changelog source.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+-- a real-time materialized view on the MySQL "products"
+-- which calculate the latest average of weight for the same products
+SELECT name, AVG(weight) FROM topic_products GROUP BY name;
+
+-- synchronize all the data and incremental changes of MySQL "products" table 
to
+-- Elasticsearch "products" index for future searching
+INSERT INTO elasticsearch_products
+SELECT * FROM topic_products;
+{% endhighlight %}
+</div>
+</div>
+
+
+Format Options
+----------------
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left" style="width: 25%">Option</th>
+        <th class="text-center" style="width: 8%">Required</th>
+        <th class="text-center" style="width: 7%">Default</th>
+        <th class="text-center" style="width: 10%">Type</th>
+        <th class="text-center" style="width: 50%">Description</th>
+      </tr>
+    </thead>
+    <tbody>
+    <tr>
+      <td><h5>format</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>Specify what format to use, here should be 
<code>'debezium-json'</code>.</td>
+    </tr>
+    <tr>
+      <td><h5>debezium-json.schema-include</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">false</td>
+      <td>Boolean</td>
+      <td>When setting up a Debezium Kafka Connect, users may enable a Kafka 
configuration <code>'value.converter.schemas.enable'</code> to include schema 
in the message.
+          This option indicates whether the Debezium JSON message includes the 
schema or not. </td>

Review comment:
       `Connect` -> `connect` or `connector`




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

