FANNG1 commented on code in PR #6059:
URL: https://github.com/apache/gravitino/pull/6059#discussion_r1914209090
##########
docs/hadoop-catalog-with-adls.md:
##########
@@ -28,12 +28,14 @@ Once the server is up and running, you can proceed to configure the Hadoop catal
Apart from configurations mentioned in [Hadoop-catalog-catalog-configuration](./hadoop-catalog.md#catalog-properties), the following properties are required to configure a Hadoop catalog with ADLS:
-| Configuration item | Description | Default value | Required | Since version |
-|-----------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------|----------|------------------|
-| `filesystem-providers` | The file system providers to add. Set it to `abs` if it's a Azure Blob Storage fileset, or a comma separated string that contains `abs` like `oss,abs,s3` to support multiple kinds of fileset including `abs`. | (none) | Yes | 0.8.0-incubating |
-| `default-filesystem-provider` | The name default filesystem providers of this Hadoop catalog if users do not specify the scheme in the URI. Default value is `builtin-local`, for Azure Blob Storage, if we set this value, we can omit the prefix 'abfss://' in the location. | `builtin-local` | No | 0.8.0-incubating |
-| `azure-storage-account-name ` | The account name of Azure Blob Storage. | (none) | Yes | 0.8.0-incubating |
-| `azure-storage-account-key` | The account key of Azure Blob Storage. | (none) | Yes | 0.8.0-incubating |
+| Configuration item | Description | Default value | Required | Since version |
+|-------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------|----------|------------------|
+| `filesystem-providers` | The file system providers to add. Set it to `abs` if it's an Azure Blob Storage fileset, or a comma-separated string that contains `abs`, like `oss,abs,s3`, to support multiple kinds of fileset including `abs`. | (none) | Yes | 0.8.0-incubating |
+| `default-filesystem-provider` | The name of the default filesystem provider of this Hadoop catalog if users do not specify the scheme in the URI. The default value is `builtin-local`; for Azure Blob Storage, if this value is set, the `abfss://` prefix can be omitted in the location. | `builtin-local` | No | 0.8.0-incubating |
+| `azure-storage-account-name` | The account name of Azure Blob Storage. | (none) | Yes | 0.8.0-incubating |
+| `azure-storage-account-key` | The account key of Azure Blob Storage. | (none) | Yes | 0.8.0-incubating |
+| `credential-providers` | The credential provider types, separated by comma; possible values are `adls-token` and `azure-account-key`. Since the default authentication type uses the account name and account key described above, setting this configuration enables credential vending provided by the Gravitino server, and the client no longer needs to provide authentication information like account_name/account_key to access ADLS via GVFS. Once it's set, more configuration items are needed to make it work; please see [adls-credential-vending](security/credential-vending.md) | (none) | No | 0.8.0-incubating |
Review Comment:
please link to `./security/credential-vending.md#adls-credentials`
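As a rough illustration of the table under review, the ADLS-related catalog properties could be assembled like this. This is a hedged sketch, not Gravitino API code: the property keys are taken from the table, while the account name, account key, and the choice of `abs` as the default filesystem provider are placeholder assumptions.

```python
# Hypothetical sketch: the ADLS catalog properties described in the
# table above, assembled as a plain dict. Account name/key values are
# placeholders, not real credentials.
adls_catalog_properties = {
    "filesystem-providers": "abs",          # or e.g. "oss,abs,s3" for multiple providers
    "default-filesystem-provider": "abs",   # assumption: lets locations omit "abfss://"
    "azure-storage-account-name": "my-account",      # placeholder
    "azure-storage-account-key": "my-account-key",   # placeholder
}

# Keys marked "Required: Yes" in the table (credential-providers is optional).
required = [
    "filesystem-providers",
    "azure-storage-account-name",
    "azure-storage-account-key",
]
missing = [k for k in required if k not in adls_catalog_properties]
print(missing)  # → []
```

If `credential-providers` were added (e.g. `"adls-token"`), the table notes that further configuration is needed on the server side for credential vending.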