This is an automated email from the ASF dual-hosted git repository.
vinoth pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-hudi.git
The following commit(s) were added to refs/heads/asf-site by this push:
new f1d9a80 Travis CI build asf-site
f1d9a80 is described below
commit f1d9a8088b8bc46242325dc1eb849ad96c240a60
Author: CI <[email protected]>
AuthorDate: Wed May 6 11:23:13 2020 +0000
Travis CI build asf-site
---
content/docs/use_cases.html | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/docs/use_cases.html b/content/docs/use_cases.html
index 640c785..01ea086 100644
--- a/content/docs/use_cases.html
+++ b/content/docs/use_cases.html
@@ -382,7 +382,7 @@ Unfortunately, in today’s post-mobile & pre-IoT world, <strong>late data f
 In such cases, the only remedy to guarantee correctness is to <a href="https://falcon.apache.org/FalconDocumentation.html#Handling_late_input_data">reprocess the last few hours</a> worth of data,
 over and over again each hour, which can significantly hurt the efficiency across the entire ecosystem. For e.g; imagine reprocessing TBs worth of data every hour across hundreds of workflows.</p>
-<p>Hudi comes to the rescue again, by providing a way to consume new data (including late data) from an upsteam Hudi table <code class="highlighter-rouge">HU</code> at a record granularity (not folders/partitions),
+<p>Hudi comes to the rescue again, by providing a way to consume new data (including late data) from an upstream Hudi table <code class="highlighter-rouge">HU</code> at a record granularity (not folders/partitions),
 apply the processing logic, and efficiently update/reconcile late data with a downstream Hudi table <code class="highlighter-rouge">HD</code>. Here, <code class="highlighter-rouge">HU</code> and <code class="highlighter-rouge">HD</code> can be continuously scheduled at a much more frequent schedule
 like 15 mins, and providing an end-end latency of 30 mins at <code class="highlighter-rouge">HD</code>.</p>
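The incremental-consumption pattern this patch's doc text describes can be sketched with Hudi's Spark DataSource read options. This is a minimal sketch, not part of the patch: the table path and begin instant are hypothetical, and the Spark call itself is left commented since it needs a Spark session with the hudi-spark bundle on the classpath.

```python
# Sketch: incremental read of upstream Hudi table HU at record granularity
# (hypothetical path and instant time; option keys are Hudi DataSource read options).

def incremental_read_opts(begin_instant: str) -> dict:
    """Build read options that fetch only records committed after `begin_instant`."""
    return {
        # Pull record-level changes instead of rescanning folders/partitions.
        "hoodie.datasource.query.type": "incremental",
        # Checkpoint: the last commit instant the downstream table HD processed.
        "hoodie.datasource.read.begin.instanttime": begin_instant,
    }

opts = incremental_read_opts("20200506102300")  # hypothetical last-processed instant
# new_records = spark.read.format("hudi").options(**opts).load("s3://bucket/HU/*")
# ...apply the processing logic, then upsert the results into downstream table HD.
```

Scheduling this read/process/upsert loop every 15 minutes for both `HU` and `HD` yields the ~30-minute end-to-end latency the doc mentions, with late-arriving records reconciled on each run rather than by bulk reprocessing.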