This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 3650a4ab2 [hotfix] Remove duplicate word
3650a4ab2 is described below

commit 3650a4ab21faa3de8025032a739e89148d573880
Author: Eeeddieee <[email protected]>
AuthorDate: Tue Nov 15 19:38:07 2022 +0800

    [hotfix] Remove duplicate word
---
 content/usecases.html | 2 +-
 usecases.md           | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/content/usecases.html b/content/usecases.html
index bb482829e..82dabd752 100644
--- a/content/usecases.html
+++ b/content/usecases.html
@@ -320,7 +320,7 @@
 
 <h3 id="what-are-data-pipelines">What are data pipelines?</h3>
 
-<p>Extract-transform-load (ETL) is a common approach to convert and move data between storage systems. Often ETL jobs are periodically triggered to copy data from from transactional database systems to an analytical database or a data warehouse.</p>
+<p>Extract-transform-load (ETL) is a common approach to convert and move data between storage systems. Often ETL jobs are periodically triggered to copy data from transactional database systems to an analytical database or a data warehouse.</p>
 
 <p>Data pipelines serve a similar purpose as ETL jobs. They transform and enrich data and can move it from one storage system to another. However, they operate in a continuous streaming mode instead of being periodically triggered. Hence, they are able to read records from sources that continuously produce data and move it with low latency to their destination. For example a data pipeline might monitor a file system directory for new files and write their data into an event log. Another  [...]
 
diff --git a/usecases.md b/usecases.md
index c06be33d1..8fad8a379 100644
--- a/usecases.md
+++ b/usecases.md
@@ -80,7 +80,7 @@ Flink provides very good support for continuous streaming as well as batch analy
 
 ### What are data pipelines?
 
-Extract-transform-load (ETL) is a common approach to convert and move data between storage systems. Often ETL jobs are periodically triggered to copy data from from transactional database systems to an analytical database or a data warehouse.
+Extract-transform-load (ETL) is a common approach to convert and move data between storage systems. Often ETL jobs are periodically triggered to copy data from transactional database systems to an analytical database or a data warehouse.
 
 Data pipelines serve a similar purpose as ETL jobs. They transform and enrich data and can move it from one storage system to another. However, they operate in a continuous streaming mode instead of being periodically triggered. Hence, they are able to read records from sources that continuously produce data and move it with low latency to their destination. For example a data pipeline might monitor a file system directory for new files and write their data into an event log. Another app [...]
 
