jojochuang commented on code in PR #210: URL: https://github.com/apache/ozone-site/pull/210#discussion_r2670278042
########## docs/02-quick-start/02-reading-writing-data.md: ########## @@ -1,3 +1,329 @@ -# Reading and Writing Data +# Reading and Writing Data in Ozone -**TODO:** File a subtask under [HDDS-9856](https://issues.apache.org/jira/browse/HDDS-9856) and complete this page or section. +Apache Ozone provides multiple interfaces for reading and writing data, catering to different use cases and client +preferences. This guide explains how to use the three primary interfaces within a Docker environment: + +1. **Ozone Shell (`ozone sh`)** - The native command-line interface +2. **ofs (Ozone File System)** - Hadoop-compatible file system interface +3. **S3 API** - Amazon S3 compatible REST interface + +All examples assume you already have a running Ozone cluster using Docker Compose as described in +the [Docker Installation Guide](./01-installation/01-docker.md). + +## Interface Comparison + +| Interface | Strengths | Use Cases | +|:----------------|:----------|:----------| +| **Ozone Shell** | - Full feature access<br/>- Advanced operations<br/>- Detailed metadata | - Administrative tasks<br/>- Bucket/volume management<br/>- Quota/ACL management | +| **ofs** | - Familiar HDFS-like commands<br/>- Works with existing Hadoop applications<br/>- Full cluster view | - Hadoop ecosystem integration<br/>- Applications that need filesystem semantics | +| **S3 API** | - Industry standard<br/>- Works with existing S3 clients<br/>- Language-independent | - Web applications<br/>- Multi-language environments<br/>- Existing S3 applications | + +## Using Ozone Shell (ozone sh) + +The Ozone Shell provides direct access to all Ozone features through a command-line interface. All commands follow the +pattern: + +```bash +ozone sh <object-type> <action> <path> [options] +``` + +Where `<object-type>` is `volume`, `bucket`, or `key`.
+ +### Accessing the Ozone Shell + +To use the Ozone Shell in your Docker environment, execute commands inside the `om` or `ozone-client` container: + +```bash +# Example for Docker Compose +docker compose exec om bash +# or +docker compose exec ozone-client bash + +# Now you can run 'ozone sh' commands +``` + +### Working with Volumes + +Volumes are the top-level namespace in Ozone. + +```bash +# Create a volume +ozone sh volume create /vol1 + +# List all volumes +ozone sh volume list / + +# Get volume details +ozone sh volume info /vol1 + +# Delete a volume (must be empty) +ozone sh volume delete /vol1 + +# Delete a volume recursively (deletes all buckets and keys within the volume, then the volume itself) +# WARNING: No recovery option after using this command, and no trash for FSO buckets. Requires confirmation. +ozone sh volume delete -r /vol1 +``` + +### Working with Buckets + +Buckets are containers for keys (objects) within volumes. + +```bash +# Create a bucket +ozone sh bucket create /vol1/bucket1 + +# List all buckets in a volume +ozone sh bucket list /vol1 + +# Get bucket details +ozone sh bucket info /vol1/bucket1 + +# Delete a bucket (must be empty) +ozone sh bucket delete /vol1/bucket1 + +# Delete a bucket recursively (deletes all keys within the bucket, then the bucket itself) +# WARNING: No recovery option, deleted keys won't move to trash. Requires confirmation. +ozone sh bucket delete -r /vol1/bucket1 +``` + +### Working with Keys (Objects) + +Keys are the actual data objects stored in Ozone. + +```bash +# Create a test file locally +echo "Hello Ozone via Shell" > test_shell.txt + +# Upload a file (put source to destination) +ozone sh key put /vol1/bucket1/test_shell.txt test_shell.txt Review Comment: this command assumes the bucket bucket1 exists, which is deleted in the previous section. 
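To make the quoted example self-contained, the key commands could be preceded by recreating the namespace. A minimal sketch using the names from the diff (it assumes a running quick-start cluster and is run inside the `om` or `ozone-client` container):

```shell
# Recreate the volume and bucket that the earlier sections deleted
ozone sh volume create /vol1
ozone sh bucket create /vol1/bucket1

# Now the upload no longer depends on state left over from a previous section
echo "Hello Ozone via Shell" > test_shell.txt
ozone sh key put /vol1/bucket1/test_shell.txt test_shell.txt
ozone sh key get /vol1/bucket1/test_shell.txt ./downloaded_shell.txt
```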
########## docs/02-quick-start/02-reading-writing-data.md: ########## @@ -1,3 +1,329 @@ +### Working with Buckets + +Buckets are containers for keys (objects) within volumes. + +```bash +# Create a bucket +ozone sh bucket create /vol1/bucket1 Review Comment: this command assumes /vol1 exists, which is deleted in the previous section. ########## docs/02-quick-start/02-reading-writing-data.md: ########## @@ -1,3 +1,329 @@
+ +### Working with Buckets + +Buckets are containers for keys (objects) within volumes.
+ +```bash +# Create a bucket +ozone sh bucket create /vol1/bucket1 + +# List all buckets in a volume +ozone sh bucket list /vol1 + +# Get bucket details +ozone sh bucket info /vol1/bucket1 + +# Delete a bucket (must be empty) +ozone sh bucket delete /vol1/bucket1 + +# Delete a bucket recursively (deletes all keys within the bucket, then the bucket itself) +# WARNING: No recovery option, deleted keys won't move to trash. Requires confirmation. +ozone sh bucket delete -r /vol1/bucket1 +``` + +### Working with Keys (Objects) + +Keys are the actual data objects stored in Ozone. + +```bash +# Create a test file locally +echo "Hello Ozone via Shell" > test_shell.txt + +# Upload a file (put source to destination) +ozone sh key put /vol1/bucket1/test_shell.txt test_shell.txt + +# Upload with specific replication type +# For RATIS: use -r ONE or THREE +ozone sh key put -t RATIS -r THREE /vol1/bucket1/key1_ratis test_shell.txt +# For EC: use format CODEC-DATA-PARITY-CHUNKSIZE (e.g., rs-3-2-1024k, rs-6-3-1024k, rs-10-4-1024k) +ozone sh key put -t EC -r rs-3-2-1024k /vol1/bucket1/key1_ec test_shell.txt + +# Download a file (get source to destination) +ozone sh key get /vol1/bucket1/test_shell.txt ./downloaded_shell.txt + +# Force overwrite when downloading (use -f or --force) +ozone sh key get --force /vol1/bucket1/test_shell.txt ./downloaded_shell.txt + +# Get key information +ozone sh key info /vol1/bucket1/test_shell.txt + +# List keys in a bucket +ozone sh key list /vol1/bucket1 + +# Copy a key within Ozone (not directly supported, use put/get or other interfaces) + +# Rename a key +ozone sh key rename /vol1/bucket1 test_shell.txt renamed_shell.txt + +# Delete a key +ozone sh key delete /vol1/bucket1/test_shell.txt + +# Note: In FSO buckets, deleted keys are moved to trash at /<volume>/<bucket>/.Trash/<user> +# In OBS buckets, deletion is permanent. 
+``` + +## Using ofs (Ozone File System) + +ofs provides a Hadoop-compatible file system interface (`ofs://`), making it seamless to use with applications designed +for HDFS. + +### Accessing ofs + +You can use `ozone fs` commands (a wrapper around `hdfs dfs`) inside the `om` or `ozone-client` container: + +```bash +# Inside the OM or ozone-client container +docker compose exec om bash +# or +docker compose exec ozone-client bash +``` + +### Basic ofs Operations + +ofs uses standard Hadoop filesystem commands. + +```bash +# Create volume and bucket (using filesystem semantics) +ozone fs -mkdir -p /vol1/bucket_ofs Review Comment: These commands would fail because fs.defaultFS is not set to ofs://om in the Docker Compose file. Suggest adding the "ofs://om/" prefix to the commands in this section, e.g. ozone fs -mkdir -p ofs://om/vol1/bucket_ofs ########## docs/02-quick-start/02-reading-writing-data.md: ########## @@ -1,3 +1,329 @@
+ +### Working with Buckets + +Buckets are containers for keys (objects) within volumes.
+ +### Working with Keys (Objects) + +Keys are the actual data objects stored in Ozone. + +```bash +# Create a test file locally +echo "Hello Ozone via Shell" > test_shell.txt + +# Upload a file (put source to destination) +ozone sh key put /vol1/bucket1/test_shell.txt test_shell.txt + +# Upload with specific replication type +# For RATIS: use -r ONE or THREE +ozone sh key put -t RATIS -r THREE /vol1/bucket1/key1_ratis test_shell.txt +# For EC: use format CODEC-DATA-PARITY-CHUNKSIZE (e.g., rs-3-2-1024k, rs-6-3-1024k, rs-10-4-1024k) +ozone sh key put -t EC -r rs-3-2-1024k /vol1/bucket1/key1_ec test_shell.txt Review Comment: This command requires at least 5 DataNodes. A reader following the quick start guide will have started only 3 DataNodes, so the command would fail. Suggest asking the reader to start 5 DataNodes with Docker Compose: docker compose up -d --scale datanode=5 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
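Folding the two suggestions above into the page might look like the following sketch (assuming the quick-start Docker Compose file where the OM service is named `om`; the `ofs://om/` authority and the five-DataNode scale-out come from the review comments, not from the current text of the page):

```shell
# rs-3-2 erasure coding needs at least 5 DataNodes
docker compose up -d --scale datanode=5

# Fully qualified ofs URIs work even when fs.defaultFS is not set to ofs://om
ozone fs -mkdir -p ofs://om/vol1/bucket_ofs
ozone fs -put test_shell.txt ofs://om/vol1/bucket_ofs/
ozone fs -ls ofs://om/vol1/bucket_ofs/

# With 5 DataNodes up, the EC write from the quoted diff can succeed
ozone sh key put -t EC -r rs-3-2-1024k /vol1/bucket_ofs/key1_ec test_shell.txt
```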
