This is an automated email from the ASF dual-hosted git repository.

danny0405 pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/hudi.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 48119f1  [minor] fix flink notification for streaming ingestion and quick start (#4290)
48119f1 is described below

commit 48119f1fa4ee104f92d41311bbb7a4d1d520abe2
Author: Danny Chan <[email protected]>
AuthorDate: Sun Dec 12 11:16:34 2021 +0800

    [minor] fix flink notification for streaming ingestion and quick start (#4290)
---
 website/docs/flink-quick-start-guide.md                          | 4 +---
 website/docs/hoodie_deltastreamer.md                             | 6 +++---
 website/versioned_docs/version-0.10.0/flink-quick-start-guide.md | 4 +---
 website/versioned_docs/version-0.10.0/hoodie_deltastreamer.md    | 6 +++---
 4 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/website/docs/flink-quick-start-guide.md b/website/docs/flink-quick-start-guide.md
index 2feeba6..347dbad 100644
--- a/website/docs/flink-quick-start-guide.md
+++ b/website/docs/flink-quick-start-guide.md
@@ -110,7 +110,6 @@ select * from t1;
 
 This query provides snapshot querying of the ingested data. 
 Refer to [Table types and queries](/docs/concepts#table-types--queries) for more info on all table types and query types supported.
-{: .notice--info}
 
 ### Update Data
 
@@ -124,8 +123,7 @@ insert into t1 values
 
 Notice that the save mode is now `Append`. In general, always use append mode unless you are trying to create the table for the first time.
 [Querying](#query-data) the data again will now show updated records. Each write operation generates a new [commit](/docs/concepts) 
-denoted by the timestamp. Look for changes in `_hoodie_commit_time`, `age` fields for the same `_hoodie_record_key`s in previous commit. 
-{: .notice--info}
+denoted by the timestamp. Look for changes in `_hoodie_commit_time`, `age` fields for the same `_hoodie_record_key`s in previous commit.
 
 ### Streaming Query
 
diff --git a/website/docs/hoodie_deltastreamer.md b/website/docs/hoodie_deltastreamer.md
index 3129593..a979788 100644
--- a/website/docs/hoodie_deltastreamer.md
+++ b/website/docs/hoodie_deltastreamer.md
@@ -353,7 +353,7 @@ We recommend two ways for syncing CDC data into Hudi:
 - If the upstream data cannot guarantee the order, you need to specify option `write.precombine.field` explicitly;
 - The MOR table can not handle DELETEs in event time sequence now, thus causing data loss. You better switch on the changelog mode through
   option `changelog.enabled`.
-  :::
+:::
 
 ### Bulk Insert
 
@@ -418,7 +418,7 @@ and then reduce the resources to write `incremental data` (or open the rate limi
 2. Index bootstrap triggers by the input data. User need to ensure that there is at least one record in each partition.
 3. Index bootstrap executes concurrently. User can search in log by `finish loading the index under partition` and `Load record form file` to observe the progress of index bootstrap.
 4. The first successful checkpoint indicates that the index bootstrap completed. There is no need to load the index again when recovering from the checkpoint.
-   :::
+:::
 
 ### Changelog Mode
 Hudi can keep all the intermediate changes (I / -U / U / D) of messages, then consumes through stateful computing of flink to have a near-real-time
@@ -455,7 +455,7 @@ The small file strategy lead to performance degradation. If you want to apply th
 
 ### Rate Limit
 There are many use cases that user put the full history data set onto the message queue together with the realtime incremental data. Then they consume the data from the queue into the hudi from the earliest offset using flink. Consuming history data set has these characteristics:
-1). The instant throughput is huge 2). It has serious disorder (with random writing partitions). It will lead to degradation of writing performance and throughput glitches. At this time, the speed limit parameter can be turned on to ensure smooth writing of the flow.
+1). The instant throughput is huge 2). It has serious disorder (with random writing partitions). It will lead to degradation of writing performance and throughput glitches. For this case, the speed limit parameter can be turned on to ensure smooth writing of the flow.
 
 #### Options
 |  Option Name  | Required | Default | Remarks |
diff --git a/website/versioned_docs/version-0.10.0/flink-quick-start-guide.md b/website/versioned_docs/version-0.10.0/flink-quick-start-guide.md
index 2feeba6..347dbad 100644
--- a/website/versioned_docs/version-0.10.0/flink-quick-start-guide.md
+++ b/website/versioned_docs/version-0.10.0/flink-quick-start-guide.md
@@ -110,7 +110,6 @@ select * from t1;
 
 This query provides snapshot querying of the ingested data. 
 Refer to [Table types and queries](/docs/concepts#table-types--queries) for more info on all table types and query types supported.
-{: .notice--info}
 
 ### Update Data
 
@@ -124,8 +123,7 @@ insert into t1 values
 
 Notice that the save mode is now `Append`. In general, always use append mode unless you are trying to create the table for the first time.
 [Querying](#query-data) the data again will now show updated records. Each write operation generates a new [commit](/docs/concepts) 
-denoted by the timestamp. Look for changes in `_hoodie_commit_time`, `age` fields for the same `_hoodie_record_key`s in previous commit. 
-{: .notice--info}
+denoted by the timestamp. Look for changes in `_hoodie_commit_time`, `age` fields for the same `_hoodie_record_key`s in previous commit.
 
 ### Streaming Query
 
diff --git a/website/versioned_docs/version-0.10.0/hoodie_deltastreamer.md b/website/versioned_docs/version-0.10.0/hoodie_deltastreamer.md
index 3129593..a979788 100644
--- a/website/versioned_docs/version-0.10.0/hoodie_deltastreamer.md
+++ b/website/versioned_docs/version-0.10.0/hoodie_deltastreamer.md
@@ -353,7 +353,7 @@ We recommend two ways for syncing CDC data into Hudi:
 - If the upstream data cannot guarantee the order, you need to specify option `write.precombine.field` explicitly;
 - The MOR table can not handle DELETEs in event time sequence now, thus causing data loss. You better switch on the changelog mode through
   option `changelog.enabled`.
-  :::
+:::
 
 ### Bulk Insert
 
@@ -418,7 +418,7 @@ and then reduce the resources to write `incremental data` (or open the rate limi
 2. Index bootstrap triggers by the input data. User need to ensure that there is at least one record in each partition.
 3. Index bootstrap executes concurrently. User can search in log by `finish loading the index under partition` and `Load record form file` to observe the progress of index bootstrap.
 4. The first successful checkpoint indicates that the index bootstrap completed. There is no need to load the index again when recovering from the checkpoint.
-   :::
+:::
 
 ### Changelog Mode
 Hudi can keep all the intermediate changes (I / -U / U / D) of messages, then consumes through stateful computing of flink to have a near-real-time
@@ -455,7 +455,7 @@ The small file strategy lead to performance degradation. If you want to apply th
 
 ### Rate Limit
 There are many use cases that user put the full history data set onto the message queue together with the realtime incremental data. Then they consume the data from the queue into the hudi from the earliest offset using flink. Consuming history data set has these characteristics:
-1). The instant throughput is huge 2). It has serious disorder (with random writing partitions). It will lead to degradation of writing performance and throughput glitches. At this time, the speed limit parameter can be turned on to ensure smooth writing of the flow.
+1). The instant throughput is huge 2). It has serious disorder (with random writing partitions). It will lead to degradation of writing performance and throughput glitches. For this case, the speed limit parameter can be turned on to ensure smooth writing of the flow.
 
 #### Options
 |  Option Name  | Required | Default | Remarks |
