This is an automated email from the ASF dual-hosted git repository.

leesf pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-hudi.git
The following commit(s) were added to refs/heads/asf-site by this push:
     new f063d5f  [HUDI-823] fix typo (#1545)

f063d5f is described below

commit f063d5f4b6024b81154ed76fe61e15c9eccf5493
Author: wanglisheng81 <37138788+wanglishen...@users.noreply.github.com>
AuthorDate: Tue Apr 21 19:31:39 2020 +0800

    [HUDI-823] fix typo (#1545)
---
 docs/_docs/1_1_quick_start_guide.cn.md | 4 ++--
 docs/_docs/1_1_quick_start_guide.md    | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/_docs/1_1_quick_start_guide.cn.md b/docs/_docs/1_1_quick_start_guide.cn.md
index d8bf2e7..7404bb8 100644
--- a/docs/_docs/1_1_quick_start_guide.cn.md
+++ b/docs/_docs/1_1_quick_start_guide.cn.md
@@ -54,7 +54,7 @@ df.write.format("org.apache.hudi").
 `mode(Overwrite)` overwrites and recreates the dataset if it already exists.
 You can check the data generated under `/tmp/hudi_cow_table/<region>/<country>/<city>/`. We provided a record key
-(`uuid` in the [schema](#sample-schema)), a partition field (`region/county/city`), and combine logic (`ts` in the [schema](#sample-schema))
+(`uuid` in the [schema](#sample-schema)), a partition field (`region/country/city`), and combine logic (`ts` in the [schema](#sample-schema))
 to ensure trip records are unique within each partition. For more information, see
 [Modeling data stored in Hudi](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=113709185#FAQ-HowdoImodelthedatastoredinHudi),
 and for ways to ingest data into Hudi, see [Writing Hudi Datasets](/cn/docs/writing_data.html).
@@ -158,4 +158,4 @@ spark.sql("select `_hoodie_commit_time`, fare, begin_lon, begin_lat, ts from hu
 Here we used Spark to demonstrate Hudi's capabilities. However, Hudi can support multiple storage types/views, and Hudi datasets can be queried from engines such as Hive, Spark, and Presto.
 We have produced a [demo video](https://www.youtube.com/watch?v=VhNgUsxdrD0) based on a Docker setup with all dependent systems running locally,
 and we recommend you replicate the same setup and run the demo yourself by following the steps [here](/cn/docs/docker_demo.html).
-Also, if you are looking to migrate existing data into Hudi, refer to the [migration guide](/cn/docs/migration_guide.html).
\ No newline at end of file
+Also, if you are looking to migrate existing data into Hudi, refer to the [migration guide](/cn/docs/migration_guide.html).
diff --git a/docs/_docs/1_1_quick_start_guide.md b/docs/_docs/1_1_quick_start_guide.md
index 8111acf..08269ec 100644
--- a/docs/_docs/1_1_quick_start_guide.md
+++ b/docs/_docs/1_1_quick_start_guide.md
@@ -70,7 +70,7 @@ df.write.format("hudi").
 `mode(Overwrite)` overwrites and recreates the table if it already exists.
 You can check the data generated under `/tmp/hudi_trips_cow/<region>/<country>/<city>/`. We provided a record key
-(`uuid` in [schema](https://github.com/apache/incubator-hudi/blob/master/hudi-spark/src/main/java/org/apache/hudi/QuickstartUtils.java#L58)), partition field (`region/county/city`) and combine logic (`ts` in
+(`uuid` in [schema](https://github.com/apache/incubator-hudi/blob/master/hudi-spark/src/main/java/org/apache/hudi/QuickstartUtils.java#L58)), partition field (`region/country/city`) and combine logic (`ts` in
 [schema](https://github.com/apache/incubator-hudi/blob/master/hudi-spark/src/main/java/org/apache/hudi/QuickstartUtils.java#L58)) to ensure trip records are unique within each partition. For more info, refer to
 [Modeling data stored in Hudi](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=113709185#FAQ-HowdoImodelthedatastoredinHudi)
 and for info on ways to ingest data into Hudi, refer to [Writing Hudi Tables](/docs/writing_data.html).
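For context on why this one-word fix matters: the quickstart's partition field `region/country/city` determines the directory layout the docs describe under `/tmp/hudi_trips_cow/<region>/<country>/<city>/`. The sketch below (plain Python, not Hudi code; the record values are hypothetical) illustrates how such a slash-separated partition field maps a record onto that layout, so a misspelled field name like `county` would not match any record column.

```python
def partition_dir(base_path: str, partition_fields: str, record: dict) -> str:
    """Join a record's partition-field values into a storage path,
    mimicking a layout like <base>/<region>/<country>/<city>."""
    parts = [str(record[field]) for field in partition_fields.split("/")]
    return "/".join([base_path.rstrip("/")] + parts)

# Hypothetical trip record, shaped like the quickstart's sample schema
# (uuid record key, region/country/city partition fields, ts for combine logic).
record = {"uuid": "abc-123", "region": "americas", "country": "brazil",
          "city": "sao_paulo", "ts": 1587470000}

print(partition_dir("/tmp/hudi_trips_cow", "region/country/city", record))
# -> /tmp/hudi_trips_cow/americas/brazil/sao_paulo
```

With the typo, `partition_fields.split("/")` would yield `"county"`, a key absent from the record, which is exactly the kind of mismatch the doc fix avoids.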