This is an automated email from the ASF dual-hosted git repository.
wangxianghu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/hudi.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 2c22215 [MINOR][DOCS] Fix hdfs namenode explorer link (#3463)
2c22215 is described below
commit 2c22215bfdd0743c9b164dcbf13d062460fbb861
Author: yangrong688 <[email protected]>
AuthorDate: Fri Aug 13 17:48:41 2021 +0800
[MINOR][DOCS] Fix hdfs namenode explorer link (#3463)
add .html suffix to make the namenode explorer link work normally
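The reason for the suffix: the Hadoop NameNode web UI serves the file browser as a static page at `/explorer.html`, and the part after `#` is only a client-side fragment, so the bare `/explorer` path generally fails to resolve. As a hedged illustration (the helper name and behavior are hypothetical, not part of this commit), the rewrite applied in the diff below amounts to:

```python
def fix_explorer_link(url: str) -> str:
    """Insert the .html suffix before the fragment of an HDFS explorer URL.

    Hypothetical helper: the NameNode UI serves the browser at
    /explorer.html, so links using the bare /explorer path break.
    """
    broken = "/explorer#"
    if broken in url:
        return url.replace(broken, "/explorer.html#")
    return url  # already well-formed; leave untouched

print(fix_explorer_link(
    "http://namenode:50070/explorer#/user/hive/warehouse/stock_ticks_cow"))
# → http://namenode:50070/explorer.html#/user/hive/warehouse/stock_ticks_cow
```

Note the helper is idempotent: a link that already contains `.html` is returned unchanged, which is the same property the diff relies on (only the four broken occurrences are touched).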
---
website/docs/docker_demo.md | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/website/docs/docker_demo.md b/website/docs/docker_demo.md
index 98b0417..0f1a194 100644
--- a/website/docs/docker_demo.md
+++ b/website/docs/docker_demo.md
@@ -189,13 +189,13 @@ exit
```
You can use HDFS web-browser to look at the tables
-`http://namenode:50070/explorer#/user/hive/warehouse/stock_ticks_cow`.
+`http://namenode:50070/explorer.html#/user/hive/warehouse/stock_ticks_cow`.
You can explore the new partition folder created in the table along with a
"commit" / "deltacommit"
file under .hoodie which signals a successful commit.
There will be a similar setup when you browse the MOR table
-`http://namenode:50070/explorer#/user/hive/warehouse/stock_ticks_mor`
+`http://namenode:50070/explorer.html#/user/hive/warehouse/stock_ticks_mor`
### Step 3: Sync with Hive
@@ -584,10 +584,10 @@ exit
```
With Copy-On-Write table, the second ingestion by DeltaStreamer resulted in a
new version of Parquet file getting created.
-See `http://namenode:50070/explorer#/user/hive/warehouse/stock_ticks_cow/2018/08/31`
+See `http://namenode:50070/explorer.html#/user/hive/warehouse/stock_ticks_cow/2018/08/31`
With Merge-On-Read table, the second ingestion merely appended the batch to an
unmerged delta (log) file.
-Take a look at the HDFS filesystem to get an idea: `http://namenode:50070/explorer#/user/hive/warehouse/stock_ticks_mor/2018/08/31`
+Take a look at the HDFS filesystem to get an idea: `http://namenode:50070/explorer.html#/user/hive/warehouse/stock_ticks_mor/2018/08/31`
### Step 6 (a): Run Hive Queries