This is an automated email from the ASF dual-hosted git repository.
siyao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/ozone.git
The following commit(s) were added to refs/heads/master by this push:
new 516bc9659b HDDS-13148. [Docs] Update Transparent Data Encryption doc. (#8530)
516bc9659b is described below
commit 516bc9659b2d4d2567c7570e43d0eb012b666f5a
Author: Wei-Chiu Chuang <[email protected]>
AuthorDate: Tue Jun 10 18:50:54 2025 -0700
HDDS-13148. [Docs] Update Transparent Data Encryption doc. (#8530)
Co-authored-by: gemini-code-assist[bot]
<176961590+gemini-code-assist[bot]@users.noreply.github.com>
---
hadoop-hdds/docs/content/security/SecuringTDE.md | 150 ++++++++++++++---------
1 file changed, 92 insertions(+), 58 deletions(-)
diff --git a/hadoop-hdds/docs/content/security/SecuringTDE.md
b/hadoop-hdds/docs/content/security/SecuringTDE.md
index 0d04a28aec..14af765418 100644
--- a/hadoop-hdds/docs/content/security/SecuringTDE.md
+++ b/hadoop-hdds/docs/content/security/SecuringTDE.md
@@ -25,64 +25,81 @@ icon: lock
limitations under the License.
-->
-Ozone TDE setup process and usage are very similar to HDFS TDE.
-The major difference is that Ozone TDE is enabled at Ozone bucket level
-when a bucket is created.
+Ozone Transparent Data Encryption (TDE) enables you to encrypt data at rest.
+TDE is enabled at the bucket level when a bucket is created. To use TDE, an
+administrator must first configure a Key Management Server (KMS). Ozone can
+work with **Hadoop KMS** and **Ranger KMS**. The KMS URI needs to be provided
+to Ozone via the `core-site.xml` configuration file.
-### Setting up the Key Management Server
+Once the KMS is configured, users can create an encryption key and then create
+an encrypted bucket using that key. All data written to an encrypted bucket
+will be transparently encrypted on the server-side, and data read from the
+bucket will be transparently decrypted.
-To use TDE, admin must setup a Key Management Server and provide that URI to
-Ozone/HDFS. Since Ozone and HDFS can use the same Key Management Server, this
-configuration can be provided via *core-site.xml*.
+### Configuring TDE
-Property| Value
------------------------------------|-----------------------------------------
-hadoop.security.key.provider.path | KMS uri. <br> e.g. kms://http@kms-host:9600/kms
+1. **Set up a Key Management Server (KMS):**
+   * **Hadoop KMS:** Follow the instructions in the [Hadoop KMS documentation](https://hadoop.apache.org/docs/r3.4.1/hadoop-kms/index.html).
+   * **Ranger KMS:** For Ranger KMS, encryption keys can be managed via the Ranger KMS management console or its [REST API](https://ranger.apache.org/kms/apidocs/index.html), in addition to the `hadoop key` command line interface.
-### Using Transparent Data Encryption
-If this is already configured for your cluster, then you can simply proceed
-to create the encryption key and enable encrypted buckets.
+2. **Configure Ozone:**
+ Add the following property to Ozone’s `core-site.xml`:
-To create an encrypted bucket, client need to:
+ <property>
+ <name>hadoop.security.key.provider.path</name>
+ <value>kms://http@kms-host:9600/kms</value>
+ </property>
- * Create a bucket encryption key with hadoop key CLI, which is similar to
- how you would use HDFS encryption zones.
+   Replace `kms://http@kms-host:9600/kms` with the actual URI of your KMS. For example, `kms://[email protected]:9600/kms`.
- ```bash
- hadoop key create enckey
- ```
- The above command creates an encryption key for the bucket you want to
protect.
- Once the key is created, you can tell Ozone to use that key when you are
- reading and writing data into a bucket.
+### Creating an Encryption Key
- * Assign the encryption key to a bucket.
+Use the `hadoop key create` command to create an encryption key in the configured KMS:
- ```bash
- ozone sh bucket create -k enckey /vol/encryptedbucket
- ```
+```shell
+hadoop key create <key_name> [-size <key_bit_length>] [-cipher <cipher_suite>] [-description <description>]
+```
+
+* `<key_name>`: The name of the encryption key.
+* **`-size <key_bit_length>` (Optional):** Specifies the key bit length. The default is 128 bits (defined by `hadoop.security.key.default.bitlength`). Ranger KMS supports both 128 and 256 bits. Hadoop KMS is also commonly used with 128 and 256 bit keys; for specific version capabilities, consult the Hadoop KMS documentation. Valid AES key lengths are 128, 192, and 256 bits.
+* **`-cipher <cipher_suite>` (Optional):** Specifies the cipher suite. Currently, only **`AES/CTR/NoPadding`** (the default) is supported.
+* `-description <description>` (Optional): A description for the key.
+
+For example:
+
+```shell
+hadoop key create enckey -size 256 -cipher AES/CTR/NoPadding -description "Encryption key for my_bucket"
+```
+
+### Creating an Encrypted Bucket
-After this command, all data written to the _encryptedbucket_ will be encrypted
-via the enckey and while reading the clients will talk to Key Management
-Server and read the key and decrypt it. In other words, the data stored
-inside Ozone is always encrypted. The fact that data is encrypted at rest
-will be completely transparent to the clients and end users.
+Use the Ozone shell `ozone sh bucket create` command with the `-k` (or `--key`) option to specify the encryption key:
+
+```shell
+ozone sh bucket create --key <key_name> /<volume_name>/<bucket_name>
+```
+
+For example:
+
+```shell
+ozone sh bucket create --key enckey /vol1/encrypted_bucket
+```
+
+Now, all data written to `/vol1/encrypted_bucket` will be encrypted at rest.
+As long as the client is configured correctly to use the key, such encryption
+is completely transparent to the end users.
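To see the transparency in practice, here is a sketch of a write/read round trip (assuming the bucket above exists and the client's `core-site.xml` points at the KMS; the key name `hello.txt` and local file names are hypothetical):

```shell
# Write a local file as a key in the encrypted bucket; the data is encrypted
# with a data encryption key protected by the bucket's key 'enckey'.
ozone sh key put /vol1/encrypted_bucket/hello.txt ./hello.txt

# Read it back; decryption happens transparently for authorized clients.
ozone sh key get /vol1/encrypted_bucket/hello.txt ./hello.copy.txt
```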
### Using Transparent Data Encryption from S3G
-There are two ways to create an encrypted bucket that can be accessed via S3 Gateway.
+Ozone’s S3 Gateway (S3G) allows you to access encrypted buckets. However, it's
+important to note that **Ozone does not support S3-SSE (Server-Side Encryption)
+or S3-CSE (Client-Side Encryption) in the way AWS S3 does.** That said, Ozone
+S3 buckets can be encrypted using Ranger KMS or Hadoop KMS to provide a
+guarantee similar to S3-SSE with client-supplied keys (SSE-C).
-#### Option 1. Create a bucket using shell under "/s3v" volume
+When creating an encrypted bucket that will be accessed via S3G:
- ```bash
- ozone sh bucket create -k enckey --layout=OBJECT_STORE /s3v/encryptedbucket
- ```
+1. **Create the bucket under the `/s3v` volume:**
+ The `/s3v` volume is the default volume for S3 buckets.
-#### Option 2. Create a link to an encrypted bucket under "/s3v" volume
+```shell
+ozone sh bucket create --key <key_name> /s3v/<bucket_name> --layout=OBJECT_STORE
+```
- ```bash
- ozone sh bucket create -k enckey --layout=OBJECT_STORE /vol/encryptedbucket
- ozone sh bucket link /vol/encryptedbucket /s3v/linkencryptedbucket
- ```
+2. **Alternatively, create an encrypted bucket elsewhere and link it:**
+
+```shell
+ozone sh bucket create --key <key_name> /<volume_name>/<bucket_name> --layout=OBJECT_STORE
+ozone sh bucket link /<volume_name>/<bucket_name> /s3v/<link_name>
+```
Note 1: An encrypted bucket cannot be created via S3 APIs. It must be done using Ozone shell commands as shown above.
After creating an encrypted bucket, all the keys added to this bucket using S3G will be encrypted.
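Once an encrypted bucket exists under `/s3v`, it can be used through S3G like any other bucket. A hypothetical example with the AWS CLI (assuming S3G listens at `http://localhost:9878`, credentials are already configured, and the bucket and object names are placeholders):

```shell
# Upload an object; encryption happens transparently on the Ozone side.
aws s3api put-object --endpoint-url http://localhost:9878 \
    --bucket encrypted_bucket --key doc1.txt --body ./doc1.txt

# Download it back; the object is decrypted transparently.
aws s3api get-object --endpoint-url http://localhost:9878 \
    --bucket encrypted_bucket --key doc1.txt ./doc1.copy.txt
```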
@@ -94,12 +111,12 @@ argument, but explicitly added here to make a point).
Bucket created with the `OBJECT_STORE` type will NOT be accessible via
HCFS (ofs or o3fs) at all. And such access will be rejected. For instance:
- ```bash
+```bash
$ ozone fs -ls ofs://ozone1/s3v/encryptedbucket/
-ls: Bucket: encryptedbucket has layout: OBJECT_STORE, which does not
support file system semantics. Bucket Layout must be FILE_SYSTEM_OPTIMIZED or
LEGACY.
```
- ```bash
+```bash
$ ozone fs -ls o3fs://encryptedbucket.s3v.ozone1/
22/02/07 00:00:00 WARN fs.FileSystem: Failed to initialize fileystem
o3fs://encryptedbucket.s3v.ozone1/: java.lang.IllegalArgumentException: Bucket:
encryptedbucket has layout: OBJECT_STORE, which does not support file system
semantics. Bucket Layout must be FILE_SYSTEM_OPTIMIZED or LEGACY.
-ls: Bucket: encryptedbucket has layout: OBJECT_STORE, which does not
support file system semantics. Bucket Layout must be FILE_SYSTEM_OPTIMIZED or
LEGACY.
@@ -112,37 +129,54 @@ However, in buckets with `FILE_SYSTEM_OPTIMIZED` layout,
some irregular S3 key names may be rejected or normalized, which can be undesired.
See [Prefix based File System Optimization]({{< relref
"../feature/PrefixFSO.md" >}}) for more information.
-In non-secure mode, the user running the S3Gateway daemon process is the proxy user,
-while in secure mode the S3Gateway Kerberos principal (ozone.s3g.kerberos.principal) is the proxy user.
-S3Gateway proxy's all the users accessing the encrypted buckets to decrypt the key.
-For this purpose on security enabled cluster, during S3Gateway server startup logins using configured
-**ozone.s3g.kerberos.keytab.file** and **ozone.s3g.kerberos.principal**.
+When accessing an S3G-enabled encrypted bucket:
-The below two configurations must be added to the kms-site.xml to allow the S3Gateway principal to act as a proxy for other users. In this example, "ozone.s3g.kerberos.principal" is assumed to be "s3g"
+The three configurations below must be added to `kms-site.xml` to allow the S3Gateway principal to act as a proxy for other users. In this example, `ozone.s3g.kerberos.principal` is assumed to be `s3g`.
```xml
<property>
<name>hadoop.kms.proxyuser.s3g.users</name>
<value>user1,user2,user3</value>
<description>
- Here the value can be all the S3G accesskey ids accessing Ozone S3
- or set to '*' to allow all the accesskey ids.
+      Specifies the list of users that the S3 Gateway (`s3g`) is allowed to
+      impersonate when interacting with the KMS. Use `*` to allow all users.
+ </description>
+</property>
+<property>
+ <name>hadoop.kms.proxyuser.s3g.groups</name>
+ <value>group1,group2,group3</value>
+ <description>
+      Specifies the list of groups whose members `s3g` is allowed to
+      impersonate when making requests to the KMS. Use `*` to allow all groups.
</description>
</property>
-
<property>
<name>hadoop.kms.proxyuser.s3g.hosts</name>
<value>s3g-host1.com</value>
<description>
-      This is the host where the S3Gateway is running. Set this to '*' to allow
-      requests from any hosts to be proxied.
+      Specifies the hostnames or IPs from which `s3g` is permitted to send
+      proxy requests to the KMS. Use `*` to allow all hosts.
</description>
</property>
```
### KMS Authorization
-If Ranger authorization is enabled for KMS, then decrypt key permission should be given to
-access key id user(currently access key is kerberos principal) to decrypt the encrypted key
-to read/write a key in the encrypted bucket.
+Key Management Servers (KMS) may enforce key access authorization. **Hadoop
+KMS supports ACLs (Access Control Lists) for fine-grained permission control,
+while Ranger KMS supports Ranger policies for encryption keys.** Ensure that
+the appropriate users have the necessary permissions based on the KMS type in
+use.
+
+For example, when using Ranger KMS for authorization, to allow the user `om`
+(the Ozone Manager user) to access the key `enckey` and the user `hdfs` (a
+typical HDFS service user) to manage keys, you might have policies in Ranger
+KMS like:
+
+* **Policy for `om` user (or the user running the Ozone Manager):**
+ * Resource: `keyname=enckey`
+ * Permissions: `DECRYPT_EEK` (Decrypt Encrypted Encryption Key)
+* **Policy for S3 Gateway proxy user (e.g., the user specified in
+  `ozone.s3g.kerberos.principal`, typically `s3g`):**
+ * Resource: `keyname=enckey` (or specific keys for S3 buckets)
+ * Permissions: `DECRYPT_EEK`
+* **Policy for administrative users (e.g., `hdfs` or a keyadmin group):**
+ * Resource: `keyname=*` (or specific keys)
+ * Permissions: `CREATE_KEY`, `DELETE_KEY`, `GET_KEYS`, `ROLL_NEW_VERSION`
+
+Refer to the Ranger documentation for detailed instructions on configuring KMS
policies if you are using Ranger KMS. For Hadoop KMS, consult its [Hadoop KMS
documentation](https://hadoop.apache.org/docs/r3.4.1/hadoop-kms/index.html#ACLs_.28Access_Control_Lists.29)
for managing ACLs.
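For Hadoop KMS, a per-key ACL roughly equivalent to the Ranger `DECRYPT_EEK` policies above might look like the following `kms-acls.xml` fragment (a sketch; the user list and key name reuse the examples from this page and are not required values):

```xml
<property>
  <name>key.acl.enckey.DECRYPT_EEK</name>
  <value>om,s3g</value>
  <description>
    Users allowed to decrypt the encrypted data encryption keys (EDEKs)
    protected by the key 'enckey'.
  </description>
</property>
```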
+
+### Additional References
+
+* For more background on Transparent Data Encryption concepts, you can refer
+  to the [Transparent Encryption in HDFS documentation](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html).
+* For detailed information on Hadoop KMS, see the [Hadoop KMS
+  documentation](https://hadoop.apache.org/docs/r3.4.1/hadoop-kms/index.html).
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]