This is an automated email from the ASF dual-hosted git repository.
adoroszlai pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/ozone.git
The following commit(s) were added to refs/heads/master by this push:
new 17d86f1 HDDS-6361. Modify docs build flow to replace image tags with shortcodes. (#3122)
17d86f1 is described below
commit 17d86f179244a9cf9bd331c136e041dcb80cdfb6
Author: Jyotinder Singh <[email protected]>
AuthorDate: Fri Feb 25 10:25:55 2022 +0530
HDDS-6361. Modify docs build flow to replace image tags with shortcodes. (#3122)
---
hadoop-hdds/docs/content/concept/Containers.md | 3 +-
hadoop-hdds/docs/content/concept/Datanodes.md | 4 +-
hadoop-hdds/docs/content/concept/OzoneManager.md | 6 +--
hadoop-hdds/docs/content/concept/Recon.md | 9 ++--
hadoop-hdds/docs/content/feature/OM-HA.md | 4 +-
hadoop-hdds/docs/content/feature/PrefixFSO.md | 6 +--
hadoop-hdds/docs/content/feature/SCM-HA.md | 2 +-
hadoop-hdds/docs/dev-support/bin/generate-site.sh | 12 ++++-
.../docs/dev-support/bin/make_images_responsive.py | 57 ++++++++++++++++++++++
.../themes/ozonedoc/layouts/shortcodes/image.html | 2 +-
10 files changed, 84 insertions(+), 21 deletions(-)
diff --git a/hadoop-hdds/docs/content/concept/Containers.md b/hadoop-hdds/docs/content/concept/Containers.md
index b5894a6..e1e1ad5 100644
--- a/hadoop-hdds/docs/content/concept/Containers.md
+++ b/hadoop-hdds/docs/content/concept/Containers.md
@@ -28,8 +28,7 @@ Containers are the fundamental replication unit of Ozone/HDDS, they are managed
Containers are big binary units (5Gb by default) which can contain multiple
blocks:
-{{< image src="Containers.png">}}
-
+![Containers](Containers.png)
Blocks are local information and not managed by SCM. Therefore even if
billions of small files are created in the system (which means billions of
blocks are created), only the status of the containers will be reported by
the Datanodes, and containers will be replicated.
When Ozone Manager requests a new Block allocation from the SCM, SCM will
identify the suitable container and generate a block id which contains
`ContainerId` + `LocalId`. The client connects to the Datanode which stores the
Container, and the Datanode can manage the block based on the `LocalId`.
diff --git a/hadoop-hdds/docs/content/concept/Datanodes.md b/hadoop-hdds/docs/content/concept/Datanodes.md
index 4372da6..1149eba 100644
--- a/hadoop-hdds/docs/content/concept/Datanodes.md
+++ b/hadoop-hdds/docs/content/concept/Datanodes.md
@@ -31,7 +31,7 @@ about the blocks written by the clients.
## Storage Containers
-{{< image src="ContainerMetadata.png">}}
+![ContainerMetadata](ContainerMetadata.png)
A storage container is a self-contained super block. It has a list of Ozone
blocks that reside inside it, as well as on-disk files which contain the
@@ -50,7 +50,7 @@ that make up that key.
An Ozone block contains the container ID and a local ID. The figure below
shows the logical layout of an Ozone block.
-{{< image src="OzoneBlock.png">}}
+![OzoneBlock](OzoneBlock.png)
The container ID lets the clients discover the location of the container. The
authoritative information about where a container is located is with the
diff --git a/hadoop-hdds/docs/content/concept/OzoneManager.md b/hadoop-hdds/docs/content/concept/OzoneManager.md
index 50bf441..b05d314 100644
--- a/hadoop-hdds/docs/content/concept/OzoneManager.md
+++ b/hadoop-hdds/docs/content/concept/OzoneManager.md
@@ -24,7 +24,7 @@ summary: Ozone Manager is the principal name space service of Ozone. OM manages
limitations under the License.
-->
-{{< image src="OzoneManager.png">}}
+![OzoneManager](OzoneManager.png)
Ozone Manager (OM) is the namespace manager for Ozone.
@@ -60,7 +60,7 @@ understood if we trace what happens during a key write and key read.
### Key Write
-{{< image src="OzoneManager-WritePath.png">}}
+![OzoneManager-WritePath](OzoneManager-WritePath.png)
* To write a key to Ozone, a client tells Ozone manager that it would like to
write a key into a bucket that lives inside a specific volume. Once Ozone
@@ -84,7 +84,7 @@ information on Ozone manager.
### Key Reads
-{{< image src="OzoneManager-ReadPath.png">}}
+![OzoneManager-ReadPath](OzoneManager-ReadPath.png)
* Key reads are simpler: the client requests the block list from the Ozone
Manager
diff --git a/hadoop-hdds/docs/content/concept/Recon.md b/hadoop-hdds/docs/content/concept/Recon.md
index e3f6350..064127a 100644
--- a/hadoop-hdds/docs/content/concept/Recon.md
+++ b/hadoop-hdds/docs/content/concept/Recon.md
@@ -31,8 +31,7 @@ the current state of the cluster through REST based APIs and rich web UI.
## High Level Design
-{{< image src="/concept/ReconHighLevelDesign.png">}}
-
+![ReconHighLevelDesign](/concept/ReconHighLevelDesign.png)
<br/>
On a high level, Recon collects and aggregates metadata from Ozone Manager (OM),
@@ -50,8 +49,7 @@ the web UI.
## Recon and Ozone Manager
-{{< image src="/concept/ReconOmDesign.png">}}
-
+![ReconOmDesign](/concept/ReconOmDesign.png)
<br/>
Recon gets a full snapshot of OM rocks db initially from the leader OM's HTTP
@@ -68,8 +66,7 @@ further processing by OM db tasks via [Recon Task Framework](#task-framework).
## Recon and Storage Container Manager
-{{< image src="/concept/ReconScmDesign.png">}}
-
+![ReconScmDesign](/concept/ReconScmDesign.png)
<br/>
Recon also acts as a passive SCM for datanodes. When Recon is configured in the
diff --git a/hadoop-hdds/docs/content/feature/OM-HA.md b/hadoop-hdds/docs/content/feature/OM-HA.md
index 0719a29..573ce77 100644
--- a/hadoop-hdds/docs/content/feature/OM-HA.md
+++ b/hadoop-hdds/docs/content/feature/OM-HA.md
@@ -35,7 +35,7 @@ This document explain the HA setup of Ozone Manager (OM) HA, please check [this
A single Ozone Manager uses [RocksDB](https://github.com/facebook/rocksdb/) to
persist metadata (volumes, buckets, keys) locally. The HA version of Ozone
Manager does exactly the same, but all the data is replicated with the help of
the RAFT consensus algorithm to follower Ozone Manager instances.
-{{< image src="HA-OM.png">}}
+![HA-OM](HA-OM.png)
The client connects to the Leader Ozone Manager, which processes the request and
schedules the replication with RAFT. When the request is replicated to all the
followers, the leader can return with the response.
@@ -106,7 +106,7 @@ Raft can guarantee the replication of any request if the request is persisted to
The RocksDB instance is updated by a background thread with batched transactions
(a so-called "double buffer": while one of the buffers is used to commit the
data, the other collects all the new requests for the next commit). To make
all data available for the next request even if the background thread has not
yet written it, the key data is cached in memory.
-{{< image src="HA-OM-doublebuffer.png">}}
+![HA-OM-doublebuffer](HA-OM-doublebuffer.png)
The details of this approach are discussed in a separate [design doc]({{< ref
"design/omha.md" >}}), but it's an integral part of the OM HA design.
diff --git a/hadoop-hdds/docs/content/feature/PrefixFSO.md b/hadoop-hdds/docs/content/feature/PrefixFSO.md
index c51c674..7d87b26 100644
--- a/hadoop-hdds/docs/content/feature/PrefixFSO.md
+++ b/hadoop-hdds/docs/content/feature/PrefixFSO.md
@@ -47,20 +47,20 @@ Optimized (FSO) buckets, OM metadata format stores intermediate directories into
into `FileTable` as shown in the below picture. The key to the table is the
name of a directory or a file prefixed by
the unique identifier of its parent directory, `<parent unique-id>/<filename>`.
-{{< image src="PrefixFSO-Format.png">}}
+![PrefixFSO-Format](PrefixFSO-Format.png)
### Directory delete operation with prefix layout: ###
The following picture describes the OM metadata changes while performing a
delete operation on a directory.
-{{< image src="PrefixFSO-Delete.png">}}
+![PrefixFSO-Delete](PrefixFSO-Delete.png)
### Directory rename operation with prefix layout: ###
The following picture describes the OM metadata changes while performing a
rename operation on a directory.
-{{< image src="PrefixFSO-Rename.png">}}
+![PrefixFSO-Rename](PrefixFSO-Rename.png)
## Configuration
diff --git a/hadoop-hdds/docs/content/feature/SCM-HA.md b/hadoop-hdds/docs/content/feature/SCM-HA.md
index 78aafcf..ebbe998 100644
--- a/hadoop-hdds/docs/content/feature/SCM-HA.md
+++ b/hadoop-hdds/docs/content/feature/SCM-HA.md
@@ -109,7 +109,7 @@ Based on the `ozone.scm.primordial.node.id`, the init process will be ignored on
## SCM HA Security
-{{< image src="scm-secure-ha.png">}}
+![scm-secure-ha](scm-secure-ha.png)
In a secure SCM HA cluster, the SCM where we perform init is called the
primordial SCM.
Primordial SCM starts root-CA with self-signed certificates and is used to
issue a signed certificate
diff --git a/hadoop-hdds/docs/dev-support/bin/generate-site.sh b/hadoop-hdds/docs/dev-support/bin/generate-site.sh
index 3d7baa8..1556f95 100755
--- a/hadoop-hdds/docs/dev-support/bin/generate-site.sh
+++ b/hadoop-hdds/docs/dev-support/bin/generate-site.sh
@@ -31,8 +31,18 @@ if git -C $(pwd) status >& /dev/null; then
ENABLE_GIT_INFO="--enableGitInfo"
fi
+# Copy docs files to a temporary directory inside target
+# for pre-processing the markdown files.
+TMPDIR="$DOCDIR/target/tmp"
+mkdir -p "$TMPDIR"
+rsync -a --exclude="target" --exclude="public" "$DOCDIR/" "$TMPDIR"
+
+# Replace all markdown images with a hugo shortcode to make them responsive.
+python3 $DIR/make_images_responsive.py $TMPDIR
+
DESTDIR="$DOCDIR/target/classes/docs"
mkdir -p "$DESTDIR"
-cd "$DOCDIR"
+# We want to build the processed files inside the $DOCDIR/target/tmp
+cd "$TMPDIR"
hugo "${ENABLE_GIT_INFO}" -d "$DESTDIR" "$@"
cd -
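For readers without `rsync` at hand, the copy-and-exclude step above can be approximated in Python with `shutil.copytree` and `ignore_patterns`; the directory names below are stand-ins for illustration, not the script's real paths:

```python
import pathlib
import shutil
import tempfile

# Set up a stand-in docs tree: real content plus build dirs to exclude.
docdir = pathlib.Path(tempfile.mkdtemp())
(docdir / "content").mkdir()
(docdir / "target").mkdir()
(docdir / "public").mkdir()
(docdir / "content" / "Containers.md").write_text("![Containers](Containers.png)\n")

# Mirror `rsync -a --exclude="target" --exclude="public" "$DOCDIR/" "$TMPDIR"`.
tmpdir = pathlib.Path(tempfile.mkdtemp()) / "tmp"
shutil.copytree(docdir, tmpdir, ignore=shutil.ignore_patterns("target", "public"))

print(sorted(p.name for p in tmpdir.iterdir()))  # → ['content']
```

The copy matters because the preprocessing rewrites files in place; building from the temporary tree keeps the checked-in markdown untouched.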
diff --git a/hadoop-hdds/docs/dev-support/bin/make_images_responsive.py b/hadoop-hdds/docs/dev-support/bin/make_images_responsive.py
new file mode 100644
index 0000000..4c945eb
--- /dev/null
+++ b/hadoop-hdds/docs/dev-support/bin/make_images_responsive.py
@@ -0,0 +1,57 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import sys
+import os
+import re
+import logging
+
+LOGLEVEL = os.environ.get('LOGLEVEL', 'WARNING').upper()
+logging.basicConfig(level=LOGLEVEL)
+
+# The first argument to the script is the directory where the documentation is
+# stored.
+docs_directory = os.path.expanduser(sys.argv[1])
+content_directory = os.path.join(docs_directory, 'content')
+
+for root, subdirs, files in os.walk(docs_directory):
+ for filename in files:
+ # We only want to modify markdown files.
+ if filename.endswith('.md'):
+ file_path = os.path.join(root, filename)
+
+ new_file_content = []
+
+ with open(file_path, 'r', encoding='utf-8') as f:
+ for line in f:
+ # If the line contains the image tag, we need to replace it
+ if re.search(re.compile("^!\[(.*?)\]\((.*?)\)"), line):
+ logging.debug(
+                        f'file: {filename} (full path: {file_path})')
+ logging.debug(f"found markdown image: {line}")
+
+                    line_replacement = line.replace(
+                        '![', '{{< image alt="').replace(
+                        '](', '" src="').replace(')', '">}}')
+
+ logging.debug(
+ f"replaced with shortcode: {line_replacement}")
+
+ new_file_content.append(line_replacement)
+
+ else:
+ new_file_content.append(line)
+
+ with open(file_path, 'w', encoding='utf-8') as f:
+ f.writelines(new_file_content)
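The conversion the script performs can be illustrated standalone. `to_shortcode` is a hypothetical helper name, and the exact replacement strings are inferred from the regex and the shortcode syntax, so treat this as a sketch rather than the committed code:

```python
def to_shortcode(line: str) -> str:
    """Rewrite a markdown image line into the hugo image shortcode."""
    return (line.replace('![', '{{< image alt="')
                .replace('](', '" src="')
                .replace(')', '">}}'))

print(to_shortcode('![Containers](Containers.png)'))
# → {{< image alt="Containers" src="Containers.png">}}
```

Note that a bare `.replace(')', …)` rewrites every `)` on the line, so this approach relies on image lines containing nothing after the image itself; the leading regex check only guards that the line starts with a markdown image.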
diff --git a/hadoop-hdds/docs/themes/ozonedoc/layouts/shortcodes/image.html b/hadoop-hdds/docs/themes/ozonedoc/layouts/shortcodes/image.html
index 1f558d9..2d143e7 100644
--- a/hadoop-hdds/docs/themes/ozonedoc/layouts/shortcodes/image.html
+++ b/hadoop-hdds/docs/themes/ozonedoc/layouts/shortcodes/image.html
@@ -16,4 +16,4 @@
-->
<!-- shortcode to easily scale images according to page width-->
-<img src='{{ .Get "src" }}' class="img-responsive"/>
\ No newline at end of file
+<img src='{{ .Get "src" }}' alt='{{ .Get "alt" }}' class="img-responsive"/>
\ No newline at end of file
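The updated shortcode forwards an `alt` attribute alongside `src`. The template itself is a hugo/Go template, so this small Python sketch of the rendered output (with a hypothetical `render_image_shortcode` helper) is only illustrative:

```python
def render_image_shortcode(src: str, alt: str = "") -> str:
    # Mirrors the output shape of the updated ozonedoc image shortcode.
    return f"<img src='{src}' alt='{alt}' class=\"img-responsive\"/>"

print(render_image_shortcode("Containers.png", "Containers"))
# → <img src='Containers.png' alt='Containers' class="img-responsive"/>
```

Carrying the alt text through to the rendered `<img>` tag is what makes the markdown image's description survive the shortcode conversion, which helps accessibility.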
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]