This is an automated email from the ASF dual-hosted git repository.

captainzmc pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/ozone.git


The following commit(s) were added to refs/heads/master by this push:
     new b352ad03ab HDDS-7409 [doc] Update documents for better presentation (#3884)
b352ad03ab is described below

commit b352ad03abd50ad8b2c74aa33173c2299afaf4c2
Author: Hongbing Wang <[email protected]>
AuthorDate: Mon Jan 2 22:17:11 2023 +0800

    HDDS-7409 [doc] Update documents for better presentation (#3884)
    
    Co-authored-by: wanghongbing <[email protected]>
---
 hadoop-hdds/docs/content/interface/Ofs.md          | 32 +++++++++++-----------
 hadoop-hdds/docs/content/interface/S3.zh.md        |  2 +-
 hadoop-hdds/docs/content/security/SecuringTDE.md   |  2 +-
 hadoop-hdds/docs/content/security/SecurityAcls.md  |  9 ++++--
 .../docs/content/security/SecurityAcls.zh.md       |  7 ++++-
 .../docs/content/security/SecurityWithRanger.md    |  2 +-
 .../docs/content/security/SecurityWithRanger.zh.md |  2 +-
 7 files changed, 33 insertions(+), 23 deletions(-)

diff --git a/hadoop-hdds/docs/content/interface/Ofs.md b/hadoop-hdds/docs/content/interface/Ofs.md
index 7a3f892281..0ad5339411 100644
--- a/hadoop-hdds/docs/content/interface/Ofs.md
+++ b/hadoop-hdds/docs/content/interface/Ofs.md
@@ -117,7 +117,7 @@ For more usage, see: https://issues.apache.org/jira/secure/attachment/12987636/D
 OFS doesn't allow creating keys(files) directly under root or volumes.
 Users will receive an error message when they try to do that:
 
-```
+```bash
 $ ozone fs -touch /volume1/key1
 touch: Cannot create file under root or volume.
 ```
@@ -128,17 +128,17 @@ With OFS, fs.defaultFS (in core-site.xml) no longer needs to have a specific
 volume and bucket in its path like o3fs did.
 Simply put the OM host or service ID (in case of HA):
 
-```
+```xml
 <property>
-<name>fs.defaultFS</name>
-<value>ofs://omservice</value>
+  <name>fs.defaultFS</name>
+  <value>ofs://omservice</value>
 </property>
 ```
 
 The client would then be able to access every volume and bucket on the cluster
 without specifying the hostname or service ID.
 
-```
+```bash
 $ ozone fs -mkdir -p /volume1/bucket1
 ```
 
@@ -148,14 +148,14 @@ Admins can create and delete volumes and buckets easily with Hadoop FS shell.
 Volumes and buckets are treated similar to directories so they will be created
 if they don't exist with `-p`:
 
-```
+```bash
 $ ozone fs -mkdir -p ofs://omservice/volume1/bucket1/dir1/
 ```
 
 Note that the supported volume and bucket name character set rule still applies.
 For instance, bucket and volume names don't take underscore(`_`):
 
-```
+```bash
 $ ozone fs -mkdir -p /volume_1
 mkdir: Bucket or Volume name has an unsupported character : _
 ```
@@ -170,7 +170,7 @@ Important: To use it, first, an **admin** needs to create the volume tmp
 (the volume name is hardcoded for now) and set its ACL to world ALL access.
 Namely:
 
-```
+```bash
 $ ozone sh volume create tmp
 $ ozone sh volume setacl tmp -al world::a
 ```
@@ -180,7 +180,7 @@ These commands only needs to be done **once per cluster**.
 Then, **each user** needs to mkdir first to initialize their own temp bucket
 once.
 
-```
+```bash
 $ ozone fs -mkdir /tmp
 2020-06-04 00:00:00,050 [main] INFO rpc.RpcClient: Creating Bucket: tmp/0238 ...
 ```
@@ -188,7 +188,7 @@ $ ozone fs -mkdir /tmp
 After that they can write to it just like they would do to a regular
 directory. e.g.:
 
-```
+```bash
 $ ozone fs -touch /tmp/key1
 ```
 
@@ -198,12 +198,12 @@ In order to enable trash in Ozone, Please add these configs to core-site.xml
 
 {{< highlight xml >}}
 <property>
-<name>fs.trash.interval</name>
-<value>10</value>
+  <name>fs.trash.interval</name>
+  <value>10</value>
 </property>
 <property>
-<name>fs.trash.classname</name>
-<value>org.apache.hadoop.ozone.om.TrashPolicyOzone</value>
+  <name>fs.trash.classname</name>
+  <value>org.apache.hadoop.ozone.om.TrashPolicyOzone</value>
 </property>
 {{< /highlight >}}
                                            
@@ -212,7 +212,7 @@ When keys are deleted with trash enabled, they are moved to a trash directory
 under each bucket, because keys aren't allowed to be moved(renamed) between
 buckets in Ozone.
 
-```
+```bash
 $ ozone fs -rm /volume1/bucket1/key1
 2020-06-04 00:00:00,100 [main] INFO fs.TrashPolicyDefault: Moved: 'ofs://id1/volume1/bucket1/key1' to trash at: ofs://id1/volume1/bucket1/.Trash/hadoop/Current/volume1/bucket1/key1
 ```
@@ -230,7 +230,7 @@ This is very similar to how the HDFS encryption zone handles trash location.
 
 OFS supports recursive volume, bucket and key listing.
 
-i.e. `ozone fs -ls -R ofs://omservice/`` will recursively list all volumes,
+i.e. `ozone fs -ls -R ofs://omservice/` will recursively list all volumes,
 buckets and keys the user has LIST permission to if ACL is enabled.
 If ACL is disabled, the command would just list literally everything on that
 cluster.
diff --git a/hadoop-hdds/docs/content/interface/S3.zh.md b/hadoop-hdds/docs/content/interface/S3.zh.md
index a73e074858..8e574e5f31 100644
--- a/hadoop-hdds/docs/content/interface/S3.zh.md
+++ b/hadoop-hdds/docs/content/interface/S3.zh.md
@@ -89,7 +89,7 @@ HEAD 对象                         | 已实现      |
 
 如果不启用安全机制,你可以*使用***任何** AWS_ACCESS_KEY_ID 和 AWS_SECRET_ACCESS_KEY 来访问 Ozone 的 S3 服务。
 
-在启用了安全机制的情况下,你可以通过 `ozone s3 gesecret` 命令获取 key 和 secret(需要进行 Kerberos 认证)。
+在启用了安全机制的情况下,你可以通过 `ozone s3 getsecret` 命令获取 key 和 secret(需要进行 Kerberos 认证)。
 
 ```bash
 kinit -kt /etc/security/keytabs/testuser.keytab testuser/[email protected]
diff --git a/hadoop-hdds/docs/content/security/SecuringTDE.md b/hadoop-hdds/docs/content/security/SecuringTDE.md
index 8ddedc4390..3b75bee1bf 100644
--- a/hadoop-hdds/docs/content/security/SecuringTDE.md
+++ b/hadoop-hdds/docs/content/security/SecuringTDE.md
@@ -121,7 +121,7 @@ logins using configured
 
 The below two configurations must be added to the kms-site.xml to allow the S3Gateway principal to act as a proxy for other users. In this example, "ozone.s3g.kerberos.principal" is assumed to be "s3g"
 
-```
+```xml
 <property>
   <name>hadoop.kms.proxyuser.s3g.users</name>
   <value>user1,user2,user3</value>
diff --git a/hadoop-hdds/docs/content/security/SecurityAcls.md b/hadoop-hdds/docs/content/security/SecurityAcls.md
index da4b28af85..0bf32f5f5a 100644
--- a/hadoop-hdds/docs/content/security/SecurityAcls.md
+++ b/hadoop-hdds/docs/content/security/SecurityAcls.md
@@ -26,8 +26,13 @@ icon: transfer
 -->
 
 Ozone supports a set of native ACLs. These ACLs can be used independently 
-of ozone ACL plugin such as Ranger. If Apache Ranger plugin for Ozone is 
-enabled, then ACL will be checked with Ranger.
+of ozone ACL plugin such as Ranger.
+Add the following properties to the ozone-site.xml to enable native ACLs.
+
+Property|Value
+--------|------------------------------------------------------------
+ozone.acl.enabled         | true
+ozone.acl.authorizer.class| org.apache.ranger.authorization.ozone.authorizer.OzoneNativeAuthorizer
 
 Ozone ACLs are a super set of Posix and S3 ACLs.
 
diff --git a/hadoop-hdds/docs/content/security/SecurityAcls.zh.md b/hadoop-hdds/docs/content/security/SecurityAcls.zh.md
index e0b0e88911..0d2661ceb9 100644
--- a/hadoop-hdds/docs/content/security/SecurityAcls.zh.md
+++ b/hadoop-hdds/docs/content/security/SecurityAcls.zh.md
@@ -25,7 +25,12 @@ icon: transfer
   limitations under the License.
 -->
 
-Ozone 既支持原生的 ACL,也支持类似 Ranger 这样的 ACL 插件,如果启用了 Ranger 插件,则以 Ranger 中的 ACL 为准。
+Ozone 既支持类似 Ranger 这样的 ACL 插件,也支持原生的 ACL。如果需要启用原生的 ACL,在 ozone-site.xml 中添加下面的参数:
+
+Property|Value
+--------|------------------------------------------------------------
+ozone.acl.enabled         | true
+ozone.acl.authorizer.class| org.apache.ranger.authorization.ozone.authorizer.OzoneNativeAuthorizer
 
 Ozone 的 ACL 是 Posix ACL 和 S3 ACL 的超集。
 
diff --git a/hadoop-hdds/docs/content/security/SecurityWithRanger.md b/hadoop-hdds/docs/content/security/SecurityWithRanger.md
index 9428f93ec0..779183f828 100644
--- a/hadoop-hdds/docs/content/security/SecurityWithRanger.md
+++ b/hadoop-hdds/docs/content/security/SecurityWithRanger.md
@@ -47,7 +47,7 @@ ozone.acl.enabled         | true
 ozone.acl.authorizer.class| org.apache.ranger.authorization.ozone.authorizer.RangerOzoneAuthorizer
 
 To use the RangerOzoneAuthorizer, you also need to add the following environment variables to ozone-env.sh:
-```
+```bash
 export OZONE_CLASSPATH="${OZONE_HOME}/share/ozone/lib/libext/*"
 ```
 * The location of the ranger-ozone-plugin jars depends on where the Ranger Plugin is installed.
diff --git a/hadoop-hdds/docs/content/security/SecurityWithRanger.zh.md b/hadoop-hdds/docs/content/security/SecurityWithRanger.zh.md
index 9fd0d033ec..ecd1f38bd5 100644
--- a/hadoop-hdds/docs/content/security/SecurityWithRanger.zh.md
+++ b/hadoop-hdds/docs/content/security/SecurityWithRanger.zh.md
@@ -38,7 +38,7 @@ ozone.acl.enabled         | true
 ozone.acl.authorizer.class| org.apache.ranger.authorization.ozone.authorizer.RangerOzoneAuthorizer
 
 为了使用 RangerOzoneAuthorizer,还需要在 ozone-env.sh 中增加下面环境变量:
-```
+```bash
 export OZONE_CLASSPATH="${OZONE_HOME}/share/ozone/lib/libext/*"
 ```
 * ranger-ozone-plugin jars 具体路径取决于 Ranger Ozone plugin 安装配置。

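Aside (not part of the commit): the bulk of this patch simply adds language hints (```` ```bash ````, ```` ```xml ````) to previously bare code fences so the docs render with syntax highlighting. A minimal sketch of how one might scan Markdown for opening fences that still lack a hint — the function name and the strict open/close alternation assumption are illustrative only, not anything from the Ozone build:

```python
def unlabeled_fences(markdown: str) -> list[int]:
    """Return 1-based line numbers of opening ``` fences with no language hint.

    Assumes fences strictly alternate open/close (as in the docs above);
    only opening fences need a hint, closing fences are always bare.
    """
    hits = []
    inside = False  # currently inside a fenced block?
    for lineno, line in enumerate(markdown.splitlines(), start=1):
        stripped = line.strip()
        if stripped.startswith("```"):
            if not inside and stripped == "```":
                hits.append(lineno)  # opening fence with no info string
            inside = not inside  # toggle open/close state
    return hits

sample = "intro\n```\n$ ozone fs -touch /tmp/key1\n```\n```bash\n$ ozone fs -mkdir /tmp\n```\n"
print(unlabeled_fences(sample))  # → [2]
```

Running such a check before committing doc changes would catch the bare fences this patch had to fix by hand.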

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
