echonesis commented on code in PR #215:
URL: https://github.com/apache/ozone-site/pull/215#discussion_r2660131187


##########
docs/04-user-guide/01-client-interfaces/02-ofs.md:
##########
@@ -1,7 +1,394 @@
 ---
 sidebar_label: ofs
 ---
+<!-- cspell:ignore Unsets snapshottable reencryption -->
 
 # ofs: Hadoop Compatible Interface
 
-**TODO:** File a subtask under 
[HDDS-9858](https://issues.apache.org/jira/browse/HDDS-9858) and complete this 
page or section.
+The Hadoop compatible file system interface allows storage backends like Ozone to be easily integrated into the Hadoop ecosystem. The Ozone file system (ofs) is such a Hadoop compatible file system.
+
+:::warning
+Currently, Ozone supports two schemes: `o3fs://` and `ofs://`.
+
+The biggest difference between `o3fs` and `ofs` is that `o3fs` supports operations only within a **single bucket**, while `ofs` supports operations across all volumes and buckets and provides a full view of all volumes and buckets.
+:::
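To make the difference concrete, the same key can be addressed under each scheme as follows (the OM host `om`, volume `volume1`, bucket `bucket1`, and key path below are illustrative names):

```text
o3fs://bucket1.volume1.om/dir1/key1   <- rooted at a single bucket
ofs://om/volume1/bucket1/dir1/key1    <- rooted at the whole cluster
```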
+
+## The Basics
+
+Examples of valid ofs paths:
+
+```text
+ofs://om1/
+ofs://om3:9862/
+ofs://omservice/
+ofs://omservice/volume1/
+ofs://omservice/volume1/bucket1/
+ofs://omservice/volume1/bucket1/dir1
+ofs://omservice/volume1/bucket1/dir1/key1
+
+ofs://omservice/tmp/
+ofs://omservice/tmp/key1
+```
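The path structure above can be illustrated with a short, hypothetical Python sketch that splits an ofs URI into its components using only the standard library (for intuition only; this is not Ozone's client code):

```python
from urllib.parse import urlparse

def split_ofs_path(uri):
    """Split an ofs URI into (authority, volume, bucket, key) parts.

    Illustrative only: the authority is the OM host/port or HA service
    ID, the first path segment is the volume, the second the bucket,
    and the remainder is the key or directory path.
    """
    parsed = urlparse(uri)
    segments = [s for s in parsed.path.split("/") if s]
    volume = segments[0] if len(segments) > 0 else None
    bucket = segments[1] if len(segments) > 1 else None
    key = "/".join(segments[2:]) or None
    return parsed.netloc, volume, bucket, key

print(split_ofs_path("ofs://omservice/volume1/bucket1/dir1/key1"))
# ('omservice', 'volume1', 'bucket1', 'dir1/key1')
```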
+
+Volumes and mounts are located at the root level of an ofs file system.
+Buckets are listed naturally under volumes.
+Keys and directories are under each bucket.
+
+Note that for mounts, only temp mount `/tmp` is supported at the moment.
+
+## Configuration
+
+Please add the following entry to the core-site.xml.

Review Comment:
   nit: we could use `core-site.xml` to highlight it as a file.



##########
docs/04-user-guide/01-client-interfaces/02-ofs.md:
##########
@@ -1,7 +1,394 @@
 ---
 sidebar_label: ofs
 ---
+<!-- cspell:ignore Unsets snapshottable reencryption -->
 
 # ofs: Hadoop Compatible Interface
 
-**TODO:** File a subtask under 
[HDDS-9858](https://issues.apache.org/jira/browse/HDDS-9858) and complete this 
page or section.
+The Hadoop compatible file system interface allows storage backends like Ozone to be easily integrated into the Hadoop ecosystem. The Ozone file system (ofs) is such a Hadoop compatible file system.
+
+:::warning
+Currently, Ozone supports two schemes: `o3fs://` and `ofs://`.
+
+The biggest difference between `o3fs` and `ofs` is that `o3fs` supports operations only within a **single bucket**, while `ofs` supports operations across all volumes and buckets and provides a full view of all volumes and buckets.
+:::
+
+## The Basics
+
+Examples of valid ofs paths:
+
+```text
+ofs://om1/
+ofs://om3:9862/
+ofs://omservice/
+ofs://omservice/volume1/
+ofs://omservice/volume1/bucket1/
+ofs://omservice/volume1/bucket1/dir1
+ofs://omservice/volume1/bucket1/dir1/key1
+
+ofs://omservice/tmp/
+ofs://omservice/tmp/key1
+```
+
+Volumes and mounts are located at the root level of an ofs file system.
+Buckets are listed naturally under volumes.
+Keys and directories are under each bucket.
+
+Note that for mounts, only temp mount `/tmp` is supported at the moment.
+
+## Configuration
+
+Please add the following entry to the core-site.xml.
+
+```xml
+<property>
+  <name>fs.ofs.impl</name>
+  <value>org.apache.hadoop.fs.ozone.RootedOzoneFileSystem</value>
+</property>
+<property>
+  <name>fs.defaultFS</name>
+  <value>ofs://om-host.example.com/</value>
+</property>
+```
+
+This registers the ofs file system type and makes all volumes and buckets accessible through the default Hadoop compatible file system.
+
+You also need to add the Ozone filesystem JAR file to the classpath:
+
+```bash
+export HADOOP_CLASSPATH=/opt/ozone/share/ozone/lib/ozone-filesystem-hadoop3-*.jar:$HADOOP_CLASSPATH
+```
+
+(Note: with Hadoop 2.x, use the Hadoop 2.x version)
+
+Once the default file system has been set up, users can run commands like `ls`, `put`, `mkdir`, etc.
+For example:
+
+```bash
+hdfs dfs -ls /
+```
+
+Note that ofs works across all volumes and buckets. Users can create volumes and buckets with `mkdir`, for example a volume named `volume1` and a bucket named `bucket1`:
+
+```bash
+hdfs dfs -mkdir /volume1
+hdfs dfs -mkdir /volume1/bucket1
+```
+
+Or use the `put` command to write a file to the bucket:
+
+```bash
+hdfs dfs -put /etc/hosts /volume1/bucket1/test
+```
+
+For more usage, see: https://issues.apache.org/jira/secure/attachment/12987636/Design%20ofs%20v1.pdf
+
+## Differences from o3fs
+
+<!-- TODO: Link to o3fs documentation when created -->
+
+### Creating files
+
+ofs doesn't allow creating keys (files) directly under the root or under volumes.
+Users will receive an error message when they try to do so:
+
+```bash
+ozone fs -touch /volume1/key1
+# Output: touch: Cannot create file under root or volume.
+```
+
+### Simplify fs.defaultFS
+
+With ofs, `fs.defaultFS` (in `core-site.xml`) no longer needs to include a specific volume and bucket in its path as o3fs did.
+Simply use the OM host or service ID (in case of HA):
+
+```xml
+<property>
+  <name>fs.defaultFS</name>
+  <value>ofs://omservice</value>
+</property>
+```
+
+The client would then be able to access every volume and bucket on the cluster 
without specifying the hostname or service ID.
+
+```bash
+ozone fs -mkdir -p /volume1/bucket1
+```
+
+### Volume and bucket management directly from FileSystem shell
+
+Admins can create and delete volumes and buckets easily with the Hadoop FS shell.
+Volumes and buckets are treated similarly to directories, so with `-p` they will be created if they don't exist:
+
+```bash
+ozone fs -mkdir -p ofs://omservice/volume1/bucket1/dir1/
+```
+
+Note that the supported volume and bucket name character set rules still apply.
+For instance, bucket and volume names don't accept underscores (`_`):
+
+```bash
+$ ozone fs -mkdir -p /volume_1
+mkdir: Bucket or Volume name has an unsupported character : _
+```
+
+## Mounts and Configuring /tmp
+
+In order to be compatible with legacy Hadoop applications that use `/tmp/`, there is a special temp mount located at the root of the file system.
+This feature may be expanded in the future to support custom mount paths.
+
+Currently Ozone supports two configurations for `/tmp`. The first (the default) is a tmp directory for each user, consisting of a mount volume with a user-specific temp bucket. The second (configurable through Ozone-site.xml) is a sticky-bit-like tmp directory common to all users, consisting of a mount volume and a common temp bucket.

Review Comment:
   Maybe we should use `ozone-site.xml` to highlight it in order to let users 
know it's a configurable file.



##########
docs/04-user-guide/01-client-interfaces/02-ofs.md:
##########
@@ -1,7 +1,394 @@
 ---
 sidebar_label: ofs
 ---
+<!-- cspell:ignore Unsets snapshottable reencryption -->
 
 # ofs: Hadoop Compatible Interface
 
-**TODO:** File a subtask under 
[HDDS-9858](https://issues.apache.org/jira/browse/HDDS-9858) and complete this 
page or section.
+The Hadoop compatible file system interface allows storage backends like Ozone to be easily integrated into the Hadoop ecosystem. The Ozone file system (ofs) is such a Hadoop compatible file system.
+
+:::warning
+Currently, Ozone supports two schemes: `o3fs://` and `ofs://`.
+
+The biggest difference between `o3fs` and `ofs` is that `o3fs` supports operations only within a **single bucket**, while `ofs` supports operations across all volumes and buckets and provides a full view of all volumes and buckets.
+:::
+
+## The Basics
+
+Examples of valid ofs paths:
+
+```text
+ofs://om1/
+ofs://om3:9862/
+ofs://omservice/
+ofs://omservice/volume1/
+ofs://omservice/volume1/bucket1/
+ofs://omservice/volume1/bucket1/dir1
+ofs://omservice/volume1/bucket1/dir1/key1
+
+ofs://omservice/tmp/
+ofs://omservice/tmp/key1
+```
+
+Volumes and mounts are located at the root level of an ofs file system.
+Buckets are listed naturally under volumes.
+Keys and directories are under each bucket.
+
+Note that for mounts, only temp mount `/tmp` is supported at the moment.
+
+## Configuration
+
+Please add the following entry to the core-site.xml.
+
+```xml
+<property>
+  <name>fs.ofs.impl</name>
+  <value>org.apache.hadoop.fs.ozone.RootedOzoneFileSystem</value>
+</property>
+<property>
+  <name>fs.defaultFS</name>
+  <value>ofs://om-host.example.com/</value>
+</property>
+```
+
+This registers the ofs file system type and makes all volumes and buckets accessible through the default Hadoop compatible file system.
+
+You also need to add the Ozone filesystem JAR file to the classpath:
+
+```bash
+export HADOOP_CLASSPATH=/opt/ozone/share/ozone/lib/ozone-filesystem-hadoop3-*.jar:$HADOOP_CLASSPATH
+```
+
+(Note: with Hadoop 2.x, use the Hadoop 2.x version)
+
+Once the default file system has been set up, users can run commands like `ls`, `put`, `mkdir`, etc.
+For example:
+
+```bash
+hdfs dfs -ls /
+```
+
+Note that ofs works across all volumes and buckets. Users can create volumes and buckets with `mkdir`, for example a volume named `volume1` and a bucket named `bucket1`:
+
+```bash
+hdfs dfs -mkdir /volume1
+hdfs dfs -mkdir /volume1/bucket1
+```
+
+Or use the `put` command to write a file to the bucket:
+
+```bash
+hdfs dfs -put /etc/hosts /volume1/bucket1/test
+```
+
+For more usage, see: https://issues.apache.org/jira/secure/attachment/12987636/Design%20ofs%20v1.pdf
+
+## Differences from o3fs
+
+<!-- TODO: Link to o3fs documentation when created -->
+
+### Creating files
+
+ofs doesn't allow creating keys (files) directly under the root or under volumes.
+Users will receive an error message when they try to do so:
+
+```bash
+ozone fs -touch /volume1/key1
+# Output: touch: Cannot create file under root or volume.
+```
+
+### Simplify fs.defaultFS
+
+With ofs, `fs.defaultFS` (in `core-site.xml`) no longer needs to include a specific volume and bucket in its path as o3fs did.
+Simply use the OM host or service ID (in case of HA):
+
+```xml
+<property>
+  <name>fs.defaultFS</name>
+  <value>ofs://omservice</value>
+</property>
+```
+
+The client would then be able to access every volume and bucket on the cluster 
without specifying the hostname or service ID.
+
+```bash
+ozone fs -mkdir -p /volume1/bucket1
+```
+
+### Volume and bucket management directly from FileSystem shell
+
+Admins can create and delete volumes and buckets easily with the Hadoop FS shell.
+Volumes and buckets are treated similarly to directories, so with `-p` they will be created if they don't exist:
+
+```bash
+ozone fs -mkdir -p ofs://omservice/volume1/bucket1/dir1/
+```
+
+Note that the supported volume and bucket name character set rules still apply.
+For instance, bucket and volume names don't accept underscores (`_`):
+
+```bash
+$ ozone fs -mkdir -p /volume_1
+mkdir: Bucket or Volume name has an unsupported character : _
+```
+
+## Mounts and Configuring /tmp
+
+In order to be compatible with legacy Hadoop applications that use `/tmp/`, there is a special temp mount located at the root of the file system.
+This feature may be expanded in the future to support custom mount paths.
+
+Currently Ozone supports two configurations for `/tmp`. The first (the default) is a tmp directory for each user, consisting of a mount volume with a user-specific temp bucket. The second (configurable through Ozone-site.xml) is a sticky-bit-like tmp directory common to all users, consisting of a mount volume and a common temp bucket.
+
+Important: To use it, an **admin** first needs to create the volume `tmp` (the volume name is hardcoded for now) and set its ACL to world ALL access.
+Namely:
+
+```bash
+ozone sh volume create tmp
+ozone sh volume setacl tmp -al world::a
+```
+
+These commands only need to be done **once per cluster**.
+
+### For /tmp directory per user (default)
+
+Then, **each user** needs to run `mkdir` once to initialize their own temp bucket.
+
+```bash
+$ ozone fs -mkdir /tmp
+2020-06-04 00:00:00,050 [main] INFO rpc.RpcClient: Creating Bucket: tmp/0238 
...
+```
+
+After that they can write to it just as they would to a regular directory, e.g.:
+
+```bash
+ozone fs -touch /tmp/key1
+```
+
+### For a sharable /tmp directory common to all users
+
+To enable the sticky-bit common /tmp directory, update the Ozone-site.xml with 
the following property

Review Comment:
   Same here, `ozone-site.xml`.



##########
docs/04-user-guide/01-client-interfaces/02-ofs.md:
##########
@@ -1,7 +1,394 @@
 ---
 sidebar_label: ofs
 ---
+<!-- cspell:ignore Unsets snapshottable reencryption -->
 
 # ofs: Hadoop Compatible Interface
 
-**TODO:** File a subtask under 
[HDDS-9858](https://issues.apache.org/jira/browse/HDDS-9858) and complete this 
page or section.
+The Hadoop compatible file system interface allows storage backends like Ozone to be easily integrated into the Hadoop ecosystem. The Ozone file system (ofs) is such a Hadoop compatible file system.
+
+:::warning
+Currently, Ozone supports two schemes: `o3fs://` and `ofs://`.
+
+The biggest difference between `o3fs` and `ofs` is that `o3fs` supports operations only within a **single bucket**, while `ofs` supports operations across all volumes and buckets and provides a full view of all volumes and buckets.
+:::
+
+## The Basics
+
+Examples of valid ofs paths:
+
+```text
+ofs://om1/
+ofs://om3:9862/
+ofs://omservice/
+ofs://omservice/volume1/
+ofs://omservice/volume1/bucket1/
+ofs://omservice/volume1/bucket1/dir1
+ofs://omservice/volume1/bucket1/dir1/key1
+
+ofs://omservice/tmp/
+ofs://omservice/tmp/key1
+```
+
+Volumes and mounts are located at the root level of an ofs file system.
+Buckets are listed naturally under volumes.
+Keys and directories are under each bucket.
+
+Note that for mounts, only temp mount `/tmp` is supported at the moment.
+
+## Configuration
+
+Please add the following entry to the core-site.xml.
+
+```xml
+<property>
+  <name>fs.ofs.impl</name>
+  <value>org.apache.hadoop.fs.ozone.RootedOzoneFileSystem</value>
+</property>
+<property>
+  <name>fs.defaultFS</name>
+  <value>ofs://om-host.example.com/</value>
+</property>
+```
+
+This registers the ofs file system type and makes all volumes and buckets accessible through the default Hadoop compatible file system.
+
+You also need to add the Ozone filesystem JAR file to the classpath:
+
+```bash
+export HADOOP_CLASSPATH=/opt/ozone/share/ozone/lib/ozone-filesystem-hadoop3-*.jar:$HADOOP_CLASSPATH
+```
+
+(Note: with Hadoop 2.x, use the Hadoop 2.x version)
+
+Once the default file system has been set up, users can run commands like `ls`, `put`, `mkdir`, etc.
+For example:
+
+```bash
+hdfs dfs -ls /
+```
+
+Note that ofs works across all volumes and buckets. Users can create volumes and buckets with `mkdir`, for example a volume named `volume1` and a bucket named `bucket1`:
+
+```bash
+hdfs dfs -mkdir /volume1
+hdfs dfs -mkdir /volume1/bucket1
+```
+
+Or use the `put` command to write a file to the bucket:
+
+```bash
+hdfs dfs -put /etc/hosts /volume1/bucket1/test
+```
+
+For more usage, see: https://issues.apache.org/jira/secure/attachment/12987636/Design%20ofs%20v1.pdf
+
+## Differences from o3fs
+
+<!-- TODO: Link to o3fs documentation when created -->
+
+### Creating files
+
+ofs doesn't allow creating keys (files) directly under the root or under volumes.
+Users will receive an error message when they try to do so:
+
+```bash
+ozone fs -touch /volume1/key1
+# Output: touch: Cannot create file under root or volume.
+```
+
+### Simplify fs.defaultFS
+
+With ofs, `fs.defaultFS` (in `core-site.xml`) no longer needs to include a specific volume and bucket in its path as o3fs did.
+Simply use the OM host or service ID (in case of HA):
+
+```xml
+<property>
+  <name>fs.defaultFS</name>
+  <value>ofs://omservice</value>
+</property>
+```
+
+The client would then be able to access every volume and bucket on the cluster 
without specifying the hostname or service ID.
+
+```bash
+ozone fs -mkdir -p /volume1/bucket1
+```
+
+### Volume and bucket management directly from FileSystem shell
+
+Admins can create and delete volumes and buckets easily with the Hadoop FS shell.
+Volumes and buckets are treated similarly to directories, so with `-p` they will be created if they don't exist:
+
+```bash
+ozone fs -mkdir -p ofs://omservice/volume1/bucket1/dir1/
+```
+
+Note that the supported volume and bucket name character set rules still apply.
+For instance, bucket and volume names don't accept underscores (`_`):
+
+```bash
+$ ozone fs -mkdir -p /volume_1
+mkdir: Bucket or Volume name has an unsupported character : _
+```
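The naming rule above can be sketched as a small, hypothetical Python check (the regex below is an illustration of the idea, not Ozone's actual validation logic):

```python
import re

# Illustrative approximation: lowercase letters, digits, hyphens and
# dots, starting and ending with a letter or digit. This is a sketch,
# not Ozone's real character-set rule.
NAME_PATTERN = re.compile(r"^[a-z0-9][a-z0-9.-]*[a-z0-9]$")

def is_valid_name(name):
    """Return True if the name passes this simplified check."""
    return bool(NAME_PATTERN.match(name))

print(is_valid_name("volume1"))    # accepted
print(is_valid_name("volume_1"))   # rejected: underscore, as in the
                                   # mkdir error shown above
```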
+
+## Mounts and Configuring /tmp
+
+In order to be compatible with legacy Hadoop applications that use `/tmp/`, there is a special temp mount located at the root of the file system.
+This feature may be expanded in the future to support custom mount paths.
+
+Currently Ozone supports two configurations for `/tmp`. The first (the default) is a tmp directory for each user, consisting of a mount volume with a user-specific temp bucket. The second (configurable through Ozone-site.xml) is a sticky-bit-like tmp directory common to all users, consisting of a mount volume and a common temp bucket.
+
+Important: To use it, an **admin** first needs to create the volume `tmp` (the volume name is hardcoded for now) and set its ACL to world ALL access.
+Namely:
+
+```bash
+ozone sh volume create tmp
+ozone sh volume setacl tmp -al world::a
+```
+
+These commands only need to be done **once per cluster**.
+
+### For /tmp directory per user (default)
+
+Then, **each user** needs to run `mkdir` once to initialize their own temp bucket.
+
+```bash
+$ ozone fs -mkdir /tmp
+2020-06-04 00:00:00,050 [main] INFO rpc.RpcClient: Creating Bucket: tmp/0238 
...
+```
+
+After that they can write to it just as they would to a regular directory, e.g.:
+
+```bash
+ozone fs -touch /tmp/key1
+```
+
+### For a sharable /tmp directory common to all users
+
+To enable the sticky-bit common /tmp directory, update the Ozone-site.xml with 
the following property
+
+```xml
+<property>
+  <name>ozone.om.enable.ofs.shared.tmp.dir</name>
+  <value>true</value>
+</property>
+```
+
+Then, after setting up the volume `tmp` as **admin**, also configure a tmp bucket that serves as the common `/tmp` directory for all users, for example:
+
+```bash
+$ ozone sh bucket create /tmp/tmp
+$ ozone sh volume setacl tmp -a user:anyuser:rwlc \
+  user:adminuser:a,group:anyuser:rwlc,group:adminuser:a tmp/tmp
+```
+
+where `anyuser` is the username(s) the admin wants to grant access to, and `adminuser` is the admin's username.
+
+Users can then access the tmp directory as:
+
+```bash
+ozone fs -put ./NOTICE.txt ofs://om/tmp/key1
+```
+
+## Delete with trash enabled
+
+In order to enable trash in Ozone, please add these configs to core-site.xml

Review Comment:
   nit: it's better for junior users to know `core-site.xml` is a configurable 
file.



##########
docs/04-user-guide/01-client-interfaces/02-ofs.md:
##########
@@ -1,7 +1,394 @@
 ---
 sidebar_label: ofs
 ---
+<!-- cspell:ignore Unsets snapshottable reencryption -->
 
 # ofs: Hadoop Compatible Interface
 
-**TODO:** File a subtask under 
[HDDS-9858](https://issues.apache.org/jira/browse/HDDS-9858) and complete this 
page or section.
+The Hadoop compatible file system interface allows storage backends like Ozone to be easily integrated into the Hadoop ecosystem. The Ozone file system (ofs) is such a Hadoop compatible file system.
+
+:::warning
+Currently, Ozone supports two schemes: `o3fs://` and `ofs://`.
+
+The biggest difference between `o3fs` and `ofs` is that `o3fs` supports operations only within a **single bucket**, while `ofs` supports operations across all volumes and buckets and provides a full view of all volumes and buckets.
+:::
+
+## The Basics
+
+Examples of valid ofs paths:
+
+```text
+ofs://om1/
+ofs://om3:9862/
+ofs://omservice/
+ofs://omservice/volume1/
+ofs://omservice/volume1/bucket1/
+ofs://omservice/volume1/bucket1/dir1
+ofs://omservice/volume1/bucket1/dir1/key1
+
+ofs://omservice/tmp/
+ofs://omservice/tmp/key1
+```
+
+Volumes and mounts are located at the root level of an ofs file system.
+Buckets are listed naturally under volumes.
+Keys and directories are under each bucket.
+
+Note that for mounts, only temp mount `/tmp` is supported at the moment.
+
+## Configuration
+
+Please add the following entry to the core-site.xml.
+
+```xml
+<property>
+  <name>fs.ofs.impl</name>
+  <value>org.apache.hadoop.fs.ozone.RootedOzoneFileSystem</value>
+</property>
+<property>
+  <name>fs.defaultFS</name>
+  <value>ofs://om-host.example.com/</value>
+</property>
+```
+
+This registers the ofs file system type and makes all volumes and buckets accessible through the default Hadoop compatible file system.
+
+You also need to add the Ozone filesystem JAR file to the classpath:
+
+```bash
+export HADOOP_CLASSPATH=/opt/ozone/share/ozone/lib/ozone-filesystem-hadoop3-*.jar:$HADOOP_CLASSPATH
+```
+
+(Note: with Hadoop 2.x, use the Hadoop 2.x version)

Review Comment:
   nit: we could use `:::note` admonition here.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

