This is an automated email from the ASF dual-hosted git repository.
jshao pushed a commit to branch branch-0.7
in repository https://gitbox.apache.org/repos/asf/gravitino.git
The following commit(s) were added to refs/heads/branch-0.7 by this push:
new 419620b71 [#5081] improvement(docs): Add the document about cloud storage fileset. (#5400)
419620b71 is described below
commit 419620b713c82ff01ffdc3e02f8d3d6b1a600fa7
Author: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
AuthorDate: Thu Oct 31 16:49:06 2024 +0800
[#5081] improvement(docs): Add the document about cloud storage fileset. (#5400)
### What changes were proposed in this pull request?
Add a document about S3, GCS and OSS filesets.
### Why are the changes needed?
For better user experience.
Fix: #5081
### Does this PR introduce _any_ user-facing change?
N/A.
### How was this patch tested?
N/A.
Co-authored-by: Qi Yu <[email protected]>
---
docs/hadoop-catalog.md | 92 +++++++++++--
docs/how-to-use-gvfs.md | 167 ++++++++++++++++++++----
docs/iceberg-rest-service.md | 17 +--
docs/lakehouse-iceberg-catalog.md | 17 +--
docs/lakehouse-paimon-catalog.md | 33 ++---
docs/manage-fileset-metadata-using-gravitino.md | 52 ++++++++
6 files changed, 306 insertions(+), 72 deletions(-)
diff --git a/docs/hadoop-catalog.md b/docs/hadoop-catalog.md
index 4453cb317..46b22f4de 100644
--- a/docs/hadoop-catalog.md
+++ b/docs/hadoop-catalog.md
@@ -25,19 +25,85 @@ Hadoop 3. If there's any compatibility issue, please create an [issue](https://g
Besides the [common catalog properties](./gravitino-server-config.md#gravitino-catalog-properties-configuration), the Hadoop catalog has the following properties:
-| Property Name | Description | Default Value | Required | Since Version |
-|---------------|-------------|---------------|----------|---------------|
-| `location` | The storage location managed by Hadoop catalog. | (none) | No | 0.5.0 |
-| `filesystem-providers` | The names (split by comma) of filesystem providers for the Hadoop catalog. Gravitino already support built-in `builtin-local`(`local file`) and `builtin-hdfs`(`hdfs`). If users want to support more file system and add it to Gravitino, they custom more file system by implementing `FileSystemProvider`. | (none) | No | 0.7.0-incubating |
-| `default-filesystem-provider` | The name default filesystem providers of this Hadoop catalog if users do not specify the scheme in the URI. Default value is `builtin-local` | `builtin-local` | No | 0.7.0-incubating |
-| `authentication.impersonation-enable` | Whether to enable impersonation for the Hadoop catalog. | `false` | No | 0.5.1 |
-| `authentication.type` | The type of authentication for Hadoop catalog, currently we only support `kerberos`, `simple`. | `simple` | No | 0.5.1 |
-| `authentication.kerberos.principal` | The principal of the Kerberos authentication | (none) | required if the value of `authentication.type` is Kerberos. | 0.5.1 |
-| `authentication.kerberos.keytab-uri` | The URI of The keytab for the Kerberos authentication. | (none) | required if the value of `authentication.type` is Kerberos. | 0.5.1 |
-| `authentication.kerberos.check-interval-sec` | The check interval of Kerberos credential for Hadoop catalog. | 60 | No | 0.5.1 |
-| `authentication.kerberos.keytab-fetch-timeout-sec` | The fetch timeout of retrieving Kerberos keytab from `authentication.kerberos.keytab-uri`. | 60 | No | 0.5.1 |
-
-For more about `filesystem-providers`, please refer to `HadoopFileSystemProvider` or `LocalFileSystemProvider` in the source code. Furthermore, you also need to place the jar of the file system provider into the `$GRAVITINO_HOME/catalogs/hadoop/libs` directory if it's not in the classpath.
+| Property Name | Description                                     | Default Value | Required | Since Version |
+|---------------|-------------------------------------------------|---------------|----------|---------------|
+| `location`    | The storage location managed by Hadoop catalog. | (none)        | No       | 0.5.0         |
+
+Apart from the above properties, to access filesets stored in HDFS, S3, GCS, OSS, or a custom file system, you need to configure the following extra properties.
+
+#### HDFS fileset
+
+| Property Name | Description | Default Value | Required | Since Version |
+|----------------------------------------------------|-------------|---------------|----------|---------------|
+| `authentication.impersonation-enable` | Whether to enable impersonation for the Hadoop catalog. | `false` | No | 0.5.1 |
+| `authentication.type` | The type of authentication for the Hadoop catalog; currently only `kerberos` and `simple` are supported. | `simple` | No | 0.5.1 |
+| `authentication.kerberos.principal` | The principal of the Kerberos authentication. | (none) | Required if the value of `authentication.type` is `kerberos`. | 0.5.1 |
+| `authentication.kerberos.keytab-uri` | The URI of the keytab for the Kerberos authentication. | (none) | Required if the value of `authentication.type` is `kerberos`. | 0.5.1 |
+| `authentication.kerberos.check-interval-sec` | The check interval of the Kerberos credential for the Hadoop catalog. | 60 | No | 0.5.1 |
+| `authentication.kerberos.keytab-fetch-timeout-sec` | The fetch timeout of retrieving the Kerberos keytab from `authentication.kerberos.keytab-uri`. | 60 | No | 0.5.1 |
+
+#### S3 fileset
+
+| Configuration item | Description | Default value | Required | Since version |
+|--------------------------------|-------------|---------------|---------------------------|------------------|
+| `filesystem-providers` | The file system providers to add. Set it to `s3` if it's an S3 fileset, or a comma-separated string that contains `s3`, such as `gs,s3`, to support multiple kinds of fileset including S3. | (none) | Yes | 0.7.0-incubating |
+| `default-filesystem-provider` | The name of the default filesystem provider of this Hadoop catalog, used when users do not specify a scheme in the URI. The default value is `builtin-local`; for S3, if this is set to the S3 provider, the `s3a://` prefix can be omitted in the location. | `builtin-local` | No | 0.7.0-incubating |
+| `s3-endpoint` | The endpoint of AWS S3. | (none) | Yes if it's an S3 fileset. | 0.7.0-incubating |
+| `s3-access-key-id` | The access key of AWS S3. | (none) | Yes if it's an S3 fileset. | 0.7.0-incubating |
+| `s3-secret-access-key` | The secret key of AWS S3. | (none) | Yes if it's an S3 fileset. | 0.7.0-incubating |
+
+At the same time, you need to place the corresponding bundle jar [gravitino-aws-bundle-{version}.jar](https://repo1.maven.org/maven2/org/apache/gravitino/aws-bundle/) in the `${GRAVITINO_HOME}/catalogs/hadoop/libs` directory.
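As a quick sanity check, the required S3 catalog properties from the table above can be validated before creating the catalog. This is an illustrative Python sketch, not part of Gravitino; the endpoint and key values are placeholders, not real credentials:

```python
# Required catalog properties for an S3 fileset, per the table above.
REQUIRED_S3_KEYS = {
    "filesystem-providers",
    "s3-endpoint",
    "s3-access-key-id",
    "s3-secret-access-key",
}

def validate_s3_catalog_properties(props):
    """Return the missing required keys (an empty set means the config is complete)."""
    missing = REQUIRED_S3_KEYS - props.keys()
    # `filesystem-providers` must contain `s3` (possibly as part of a comma-separated list).
    if "s3" not in props.get("filesystem-providers", "").split(","):
        missing.add("filesystem-providers")
    return missing

# Placeholder values for illustration only.
props = {
    "filesystem-providers": "s3",
    "s3-endpoint": "http://localhost:9000",
    "s3-access-key-id": "minio",
    "s3-secret-access-key": "minio123",
}
print(validate_s3_catalog_properties(props))  # -> set()
```

The same check applies to the GCS and OSS tables below with their respective keys.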
+
+#### GCS fileset
+
+| Configuration item | Description | Default value | Required | Since version |
+|-------------------------------|-------------|---------------|----------------------------|------------------|
+| `filesystem-providers` | The file system providers to add. Set it to `gs` if it's a GCS fileset, or a comma-separated string that contains `gs`, such as `gs,s3`, to support multiple kinds of fileset including GCS. | (none) | Yes | 0.7.0-incubating |
+| `default-filesystem-provider` | The name of the default filesystem provider of this Hadoop catalog, used when users do not specify a scheme in the URI. The default value is `builtin-local`; for GCS, if this is set to the GCS provider, the `gs://` prefix can be omitted in the location. | `builtin-local` | No | 0.7.0-incubating |
+| `gcs-service-account-file` | The path of the GCS service account JSON file. | (none) | Yes if it's a GCS fileset. | 0.7.0-incubating |
+
+In the meantime, you need to place the corresponding bundle jar [gravitino-gcp-bundle-{version}.jar](https://repo1.maven.org/maven2/org/apache/gravitino/gcp-bundle/) in the `${GRAVITINO_HOME}/catalogs/hadoop/libs` directory.
+
+#### OSS fileset
+
+| Configuration item | Description | Default value | Required | Since version |
+|-------------------------------|-------------|---------------|----------------------------|------------------|
+| `filesystem-providers` | The file system providers to add. Set it to `oss` if it's an OSS fileset, or a comma-separated string that contains `oss`, such as `oss,gs,s3`, to support multiple kinds of fileset including OSS. | (none) | Yes | 0.7.0-incubating |
+| `default-filesystem-provider` | The name of the default filesystem provider of this Hadoop catalog, used when users do not specify a scheme in the URI. The default value is `builtin-local`; for OSS, if this is set to the OSS provider, the `oss://` prefix can be omitted in the location. | `builtin-local` | No | 0.7.0-incubating |
+| `oss-endpoint` | The endpoint of the Aliyun OSS. | (none) | Yes if it's an OSS fileset. | 0.7.0-incubating |
+| `oss-access-key-id` | The access key of the Aliyun OSS. | (none) | Yes if it's an OSS fileset. | 0.7.0-incubating |
+| `oss-secret-access-key` | The secret key of the Aliyun OSS. | (none) | Yes if it's an OSS fileset. | 0.7.0-incubating |
+
+In the meantime, you need to place the corresponding bundle jar [gravitino-aliyun-bundle-{version}.jar](https://repo1.maven.org/maven2/org/apache/gravitino/aliyun-bundle/) in the `${GRAVITINO_HOME}/catalogs/hadoop/libs` directory.
+
+:::note
+- Gravitino contains built-in file system providers for the local file system (`builtin-local`) and HDFS (`builtin-hdfs`). That is, if `filesystem-providers` is not set, Gravitino still supports the local file system and HDFS. Apart from that, you can set `filesystem-providers` to support other file systems like S3, GCS, OSS, or a custom file system.
+- `default-filesystem-provider` sets the default file system provider for the Hadoop catalog. If the user does not specify a scheme in the URI, Gravitino uses the default file system provider to access the fileset. For example, if the default file system provider is set to `builtin-local`, the user can omit the `file://` prefix in the location.
+:::
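The note above can be illustrated with a small sketch of how a scheme-less location might be resolved. The scheme mapping below mirrors the providers named in this document; the resolution function itself is illustrative, not Gravitino's actual implementation:

```python
from urllib.parse import urlparse

# Scheme used by each provider, as named in this document.
PROVIDER_SCHEMES = {
    "builtin-local": "file",
    "builtin-hdfs": "hdfs",
    "s3": "s3a",
    "gcs": "gs",
    "oss": "oss",
}

def resolve_location(location, default_provider="builtin-local"):
    """Prefix the default provider's scheme when the location has no scheme."""
    if urlparse(location).scheme:
        return location  # scheme already given, leave it as-is
    return f"{PROVIDER_SCHEMES[default_provider]}://{location}"

print(resolve_location("/tmp/fileset"))            # file:///tmp/fileset
print(resolve_location("bucket/path", "s3"))       # s3a://bucket/path
print(resolve_location("s3a://bucket/path", "s3")) # s3a://bucket/path (unchanged)
```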
+
+#### How to custom your own HCFS file system fileset?
+
+Developers and users can customize their own HCFS file system fileset by implementing the `FileSystemProvider` interface in the jar [gravitino-catalog-hadoop](https://repo1.maven.org/maven2/org/apache/gravitino/catalog-hadoop/). The `FileSystemProvider` interface is defined as follows:
+
+```java
+  // Create a FileSystem instance from the properties you set when creating the catalog.
+  FileSystem getFileSystem(@Nonnull Path path, @Nonnull Map<String, String> config)
+      throws IOException;
+
+  // The scheme of the file system provider: 'file' for the local file system,
+  // 'hdfs' for HDFS, 's3a' for AWS S3, 'gs' for GCS, 'oss' for Aliyun OSS.
+  String scheme();
+
+  // Name of the file system provider: 'builtin-local' for the local file system,
+  // 'builtin-hdfs' for HDFS, 's3' for AWS S3, 'gcs' for GCS, 'oss' for Aliyun OSS.
+  // You need to set the catalog property `filesystem-providers` to this name to enable the file system.
+  String name();
+```
+
+After implementing the `FileSystemProvider` interface, you need to put the jar file into the `$GRAVITINO_HOME/catalogs/hadoop/libs` directory. Then you can set the `filesystem-providers` property to use your custom file system provider.
+
### Authentication for Hadoop Catalog
diff --git a/docs/how-to-use-gvfs.md b/docs/how-to-use-gvfs.md
index 3a993e708..e3e03e131 100644
--- a/docs/how-to-use-gvfs.md
+++ b/docs/how-to-use-gvfs.md
@@ -49,22 +49,69 @@ the path mapping and convert automatically.
### Configuration
-| Configuration item | Description | Default value | Required | Since version |
-|-------------------------------------------------------|-------------|---------------|-------------------------------------|---------------|
-| `fs.AbstractFileSystem.gvfs.impl` | The Gravitino Virtual File System abstract class, set it to `org.apache.gravitino.filesystem.hadoop.Gvfs`. | (none) | Yes | 0.5.0 |
-| `fs.gvfs.impl` | The Gravitino Virtual File System implementation class, set it to `org.apache.gravitino.filesystem.hadoop.GravitinoVirtualFileSystem`. | (none) | Yes | 0.5.0 |
-| `fs.gvfs.impl.disable.cache` | Disable the Gravitino Virtual File System cache in the Hadoop environment. If you need to proxy multi-user operations, please set this value to `true` and create a separate File System for each user. | `false` | No | 0.5.0 |
-| `fs.gravitino.server.uri` | The Gravitino server URI which GVFS needs to load the fileset metadata. | (none) | Yes | 0.5.0 |
-| `fs.gravitino.client.metalake` | The metalake to which the fileset belongs. | (none) | Yes | 0.5.0 |
-| `fs.gravitino.client.authType` | The auth type to initialize the Gravitino client to use with the Gravitino Virtual File System. Currently only supports `simple`, `oauth2` and `kerberos` auth types. | `simple` | No | 0.5.0 |
-| `fs.gravitino.client.oauth2.serverUri` | The auth server URI for the Gravitino client when using `oauth2` auth type with the Gravitino Virtual File System. | (none) | Yes if you use `oauth2` auth type | 0.5.0 |
-| `fs.gravitino.client.oauth2.credential` | The auth credential for the Gravitino client when using `oauth2` auth type in the Gravitino Virtual File System. | (none) | Yes if you use `oauth2` auth type | 0.5.0 |
-| `fs.gravitino.client.oauth2.path` | The auth server path for the Gravitino client when using `oauth2` auth type with the Gravitino Virtual File System. Please remove the first slash `/` from the path, for example `oauth/token`. | (none) | Yes if you use `oauth2` auth type | 0.5.0 |
-| `fs.gravitino.client.oauth2.scope` | The auth scope for the Gravitino client when using `oauth2` auth type with the Gravitino Virtual File System. | (none) | Yes if you use `oauth2` auth type | 0.5.0 |
-| `fs.gravitino.client.kerberos.principal` | The auth principal for the Gravitino client when using `kerberos` auth type with the Gravitino Virtual File System. | (none) | Yes if you use `kerberos` auth type | 0.5.1 |
-| `fs.gravitino.client.kerberos.keytabFilePath` | The auth keytab file path for the Gravitino client when using `kerberos` auth type in the Gravitino Virtual File System. | (none) | No | 0.5.1 |
-| `fs.gravitino.fileset.cache.maxCapacity` | The cache capacity of the Gravitino Virtual File System. | `20` | No | 0.5.0 |
-| `fs.gravitino.fileset.cache.evictionMillsAfterAccess` | The value of time that the cache expires after accessing in the Gravitino Virtual File System. The value is in `milliseconds`. | `3600000` | No | 0.5.0 |
+| Configuration item | Description | Default value | Required | Since version |
+|-------------------------------------------------------|-------------|---------------|-------------------------------------|---------------|
+| `fs.AbstractFileSystem.gvfs.impl` | The Gravitino Virtual File System abstract class, set it to `org.apache.gravitino.filesystem.hadoop.Gvfs`. | (none) | Yes | 0.5.0 |
+| `fs.gvfs.impl` | The Gravitino Virtual File System implementation class, set it to `org.apache.gravitino.filesystem.hadoop.GravitinoVirtualFileSystem`. | (none) | Yes | 0.5.0 |
+| `fs.gvfs.impl.disable.cache` | Disable the Gravitino Virtual File System cache in the Hadoop environment. If you need to proxy multi-user operations, please set this value to `true` and create a separate File System for each user. | `false` | No | 0.5.0 |
+| `fs.gravitino.server.uri` | The Gravitino server URI which GVFS needs to load the fileset metadata. | (none) | Yes | 0.5.0 |
+| `fs.gravitino.client.metalake` | The metalake to which the fileset belongs. | (none) | Yes | 0.5.0 |
+| `fs.gravitino.client.authType` | The auth type to initialize the Gravitino client to use with the Gravitino Virtual File System. Currently only supports `simple`, `oauth2` and `kerberos` auth types. | `simple` | No | 0.5.0 |
+| `fs.gravitino.client.oauth2.serverUri` | The auth server URI for the Gravitino client when using `oauth2` auth type with the Gravitino Virtual File System. | (none) | Yes if you use `oauth2` auth type | 0.5.0 |
+| `fs.gravitino.client.oauth2.credential` | The auth credential for the Gravitino client when using `oauth2` auth type in the Gravitino Virtual File System. | (none) | Yes if you use `oauth2` auth type | 0.5.0 |
+| `fs.gravitino.client.oauth2.path` | The auth server path for the Gravitino client when using `oauth2` auth type with the Gravitino Virtual File System. Please remove the first slash `/` from the path, for example `oauth/token`. | (none) | Yes if you use `oauth2` auth type | 0.5.0 |
+| `fs.gravitino.client.oauth2.scope` | The auth scope for the Gravitino client when using `oauth2` auth type with the Gravitino Virtual File System. | (none) | Yes if you use `oauth2` auth type | 0.5.0 |
+| `fs.gravitino.client.kerberos.principal` | The auth principal for the Gravitino client when using `kerberos` auth type with the Gravitino Virtual File System. | (none) | Yes if you use `kerberos` auth type | 0.5.1 |
+| `fs.gravitino.client.kerberos.keytabFilePath` | The auth keytab file path for the Gravitino client when using `kerberos` auth type in the Gravitino Virtual File System. | (none) | No | 0.5.1 |
+| `fs.gravitino.fileset.cache.maxCapacity` | The cache capacity of the Gravitino Virtual File System. | `20` | No | 0.5.0 |
+| `fs.gravitino.fileset.cache.evictionMillsAfterAccess` | The value of time that the cache expires after accessing in the Gravitino Virtual File System. The value is in `milliseconds`. | `3600000` | No | 0.5.0 |
+
+Apart from the above properties, to access filesets stored in S3, GCS, OSS, or a custom file system, you need to configure the following extra properties.
+
+#### S3 fileset
+
+| Configuration item | Description | Default value | Required | Since version |
+|--------------------------------|-------------|---------------|----------------------------|------------------|
+| `fs.gvfs.filesystem.providers` | The file system providers to add. Set it to `s3` if it's an S3 fileset, or a comma-separated string that contains `s3`, such as `gs,s3`, to support multiple kinds of fileset including S3. | (none) | Yes if it's an S3 fileset. | 0.7.0-incubating |
+| `s3-endpoint` | The endpoint of AWS S3. | (none) | Yes if it's an S3 fileset. | 0.7.0-incubating |
+| `s3-access-key-id` | The access key of AWS S3. | (none) | Yes if it's an S3 fileset. | 0.7.0-incubating |
+| `s3-secret-access-key` | The secret key of AWS S3. | (none) | Yes if it's an S3 fileset. | 0.7.0-incubating |
+
+At the same time, you need to place the corresponding bundle jar [gravitino-aws-bundle-{version}.jar](https://repo1.maven.org/maven2/org/apache/gravitino/aws-bundle/) in the Hadoop environment (typically located in `${HADOOP_HOME}/share/hadoop/common/lib/`).
+
+
+#### GCS fileset
+
+| Configuration item | Description | Default value | Required | Since version |
+|--------------------------------|-------------|---------------|----------------------------|------------------|
+| `fs.gvfs.filesystem.providers` | The file system providers to add. Set it to `gs` if it's a GCS fileset, or a comma-separated string that contains `gs`, such as `gs,s3`, to support multiple kinds of fileset including GCS. | (none) | Yes if it's a GCS fileset. | 0.7.0-incubating |
+| `gcs-service-account-file` | The path of the GCS service account JSON file. | (none) | Yes if it's a GCS fileset. | 0.7.0-incubating |
+
+In the meantime, you need to place the corresponding bundle jar [gravitino-gcp-bundle-{version}.jar](https://repo1.maven.org/maven2/org/apache/gravitino/gcp-bundle/) in the Hadoop environment (typically located in `${HADOOP_HOME}/share/hadoop/common/lib/`).
+
+
+#### OSS fileset
+
+| Configuration item | Description | Default value | Required | Since version |
+|--------------------------------|-------------|---------------|-----------------------------|------------------|
+| `fs.gvfs.filesystem.providers` | The file system providers to add. Set it to `oss` if it's an OSS fileset, or a comma-separated string that contains `oss`, such as `oss,gs,s3`, to support multiple kinds of fileset including OSS. | (none) | Yes if it's an OSS fileset. | 0.7.0-incubating |
+| `oss-endpoint` | The endpoint of the Aliyun OSS. | (none) | Yes if it's an OSS fileset. | 0.7.0-incubating |
+| `oss-access-key-id` | The access key of the Aliyun OSS. | (none) | Yes if it's an OSS fileset. | 0.7.0-incubating |
+| `oss-secret-access-key` | The secret key of the Aliyun OSS. | (none) | Yes if it's an OSS fileset. | 0.7.0-incubating |
+
+In the meantime, you need to place the corresponding bundle jar [gravitino-aliyun-bundle-{version}.jar](https://repo1.maven.org/maven2/org/apache/gravitino/aliyun-bundle/) in the Hadoop environment (typically located in `${HADOOP_HOME}/share/hadoop/common/lib/`).
+
+#### Custom fileset
+Since 0.7.0-incubating, users can define their own fileset type and configure the corresponding properties; for more, please refer to [Custom Fileset](./hadoop-catalog.md#how-to-custom-your-own-hcfs-file-system-fileset). So, if you want to access a custom fileset through GVFS, you need to configure the corresponding properties.
+
+| Configuration item | Description | Default value | Required | Since version |
+|--------------------------------|-------------|---------------|----------|------------------|
+| `fs.gvfs.filesystem.providers` | The file system providers. Please set it to the value of `YourCustomFileSystemProvider#name`. | (none) | Yes | 0.7.0-incubating |
+| `your-custom-properties` | The properties that will be used to create a FileSystem instance in `CustomFileSystemProvider#getFileSystem`. | (none) | No | - |
+
+
You can configure these properties in two ways:
@@ -76,9 +123,21 @@ You can configure these properties in two ways:
conf.set("fs.gvfs.impl","org.apache.gravitino.filesystem.hadoop.GravitinoVirtualFileSystem");
conf.set("fs.gravitino.server.uri","http://localhost:8090");
conf.set("fs.gravitino.client.metalake","test_metalake");
+
+  // Optional. This is only for an S3 catalog; for GCS and OSS catalogs, set the corresponding properties.
+ conf.set("fs.gvfs.filesystem.providers", "s3");
+ conf.set("s3-endpoint", "http://localhost:9000");
+ conf.set("s3-access-key-id", "minio");
+ conf.set("s3-secret-access-key", "minio123");
+
  Path filesetPath = new Path("gvfs://fileset/test_catalog/test_schema/test_fileset_1");
FileSystem fs = filesetPath.getFileSystem(conf);
```
+
+:::note
+If you want to access an S3, GCS, OSS, or custom fileset through GVFS, apart from the above properties, you need to place the corresponding bundle jar in the Hadoop environment.
+For example, to access an S3 fileset, you need to place the S3 bundle jar [gravitino-aws-bundle-{version}.jar](https://repo1.maven.org/maven2/org/apache/gravitino/aws-bundle/) in the Hadoop environment (typically located in `${HADOOP_HOME}/share/hadoop/common/lib/`) or add it to the classpath.
+:::
2. Configure the properties in the `core-site.xml` file of the Hadoop environment:
@@ -102,6 +161,24 @@ You can configure these properties in two ways:
<name>fs.gravitino.client.metalake</name>
<value>test_metalake</value>
</property>
+
+  <!-- Optional. This is only for an S3 catalog; for GCS and OSS catalogs, set the corresponding properties. -->
+ <property>
+ <name>fs.gvfs.filesystem.providers</name>
+ <value>s3</value>
+ </property>
+ <property>
+ <name>s3-endpoint</name>
+ <value>http://localhost:9000</value>
+ </property>
+ <property>
+ <name>s3-access-key-id</name>
+ <value>minio</value>
+ </property>
+ <property>
+ <name>s3-secret-access-key</name>
+ <value>minio123</value>
+ </property>
```
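Equivalently, the `<property>` entries above can be generated programmatically. This is an illustrative Python sketch (the key and value strings are the same placeholders as in the XML above, not real credentials):

```python
import xml.etree.ElementTree as ET

def to_core_site_properties(props):
    """Render a dict of Hadoop configuration keys as core-site.xml <property> elements."""
    root = ET.Element("configuration")
    for name, value in props.items():
        prop = ET.SubElement(root, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = value
    return ET.tostring(root, encoding="unicode")

xml = to_core_site_properties({
    "fs.gvfs.filesystem.providers": "s3",
    "s3-endpoint": "http://localhost:9000",  # placeholder endpoint
})
print(xml)
```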
### Usage examples
@@ -335,17 +412,42 @@ to recompile the native libraries like `libhdfs` and others, and completely repl
### Configuration
-| Configuration item | Description | Default value | Required | Since version |
-|----------------------|-------------|---------------|-----------------------------------|------------------|
-| `server_uri` | The Gravitino server uri, e.g. `http://localhost:8090`. | (none) | Yes | 0.6.0-incubating |
-| `metalake_name` | The metalake name which the fileset belongs to. | (none) | Yes | 0.6.0-incubating |
-| `cache_size` | The cache capacity of the Gravitino Virtual File System. | `20` | No | 0.6.0-incubating |
-| `cache_expired_time` | The value of time that the cache expires after accessing in the Gravitino Virtual File System. The value is in `seconds`. | `3600` | No | 0.6.0-incubating |
-| `auth_type` | The auth type to initialize the Gravitino client to use with the Gravitino Virtual File System. Currently supports `simple` and `oauth2` auth types. | `simple` | No | 0.6.0-incubating |
-| `oauth2_server_uri` | The auth server URI for the Gravitino client when using `oauth2` auth type. | (none) | Yes if you use `oauth2` auth type | 0.7.0-incubating |
-| `oauth2_credential` | The auth credential for the Gravitino client when using `oauth2` auth type. | (none) | Yes if you use `oauth2` auth type | 0.7.0-incubating |
-| `oauth2_path` | The auth server path for the Gravitino client when using `oauth2` auth type. Please remove the first slash `/` from the path, for example `oauth/token`. | (none) | Yes if you use `oauth2` auth type | 0.7.0-incubating |
-| `oauth2_scope` | The auth scope for the Gravitino client when using `oauth2` auth type with the Gravitino Virtual File System. | (none) | Yes if you use `oauth2` auth type | 0.7.0-incubating |
+| Configuration item | Description | Default value | Required | Since version |
+|----------------------------|-------------|---------------|-----------------------------------|------------------|
+| `server_uri` | The Gravitino server uri, e.g. `http://localhost:8090`. | (none) | Yes | 0.6.0-incubating |
+| `metalake_name` | The metalake name which the fileset belongs to. | (none) | Yes | 0.6.0-incubating |
+| `cache_size` | The cache capacity of the Gravitino Virtual File System. | `20` | No | 0.6.0-incubating |
+| `cache_expired_time` | The value of time that the cache expires after accessing in the Gravitino Virtual File System. The value is in `seconds`. | `3600` | No | 0.6.0-incubating |
+| `auth_type` | The auth type to initialize the Gravitino client to use with the Gravitino Virtual File System. Currently supports `simple` and `oauth2` auth types. | `simple` | No | 0.6.0-incubating |
+| `oauth2_server_uri` | The auth server URI for the Gravitino client when using `oauth2` auth type. | (none) | Yes if you use `oauth2` auth type | 0.7.0-incubating |
+| `oauth2_credential` | The auth credential for the Gravitino client when using `oauth2` auth type. | (none) | Yes if you use `oauth2` auth type | 0.7.0-incubating |
+| `oauth2_path` | The auth server path for the Gravitino client when using `oauth2` auth type. Please remove the first slash `/` from the path, for example `oauth/token`. | (none) | Yes if you use `oauth2` auth type | 0.7.0-incubating |
+| `oauth2_scope` | The auth scope for the Gravitino client when using `oauth2` auth type with the Gravitino Virtual File System. | (none) | Yes if you use `oauth2` auth type | 0.7.0-incubating |
+
+
+#### Extra configuration for S3, GCS, OSS fileset
+
+The following properties are required if you want to access an S3 fileset via the GVFS Python client:
+
+| Configuration item | Description | Default value | Required | Since version |
+|----------------------------|-------------|---------------|----------------------------|------------------|
+| `s3_endpoint` | The endpoint of AWS S3. | (none) | Yes if it's an S3 fileset. | 0.7.0-incubating |
+| `s3_access_key_id` | The access key of AWS S3. | (none) | Yes if it's an S3 fileset. | 0.7.0-incubating |
+| `s3_secret_access_key` | The secret key of AWS S3. | (none) | Yes if it's an S3 fileset. | 0.7.0-incubating |
+
+The following properties are required if you want to access a GCS fileset via the GVFS Python client:
+
+| Configuration item | Description | Default value | Required | Since version |
+|----------------------------|-------------|---------------|----------------------------|------------------|
+| `gcs_service_account_file` | The path of the GCS service account JSON file. | (none) | Yes if it's a GCS fileset. | 0.7.0-incubating |
+
+The following properties are required if you want to access an OSS fileset via the GVFS Python client:
+
+| Configuration item | Description | Default value | Required | Since version |
+|----------------------------|-------------|---------------|-----------------------------|------------------|
+| `oss_endpoint` | The endpoint of the Aliyun OSS. | (none) | Yes if it's an OSS fileset. | 0.7.0-incubating |
+| `oss_access_key_id` | The access key of the Aliyun OSS. | (none) | Yes if it's an OSS fileset. | 0.7.0-incubating |
+| `oss_secret_access_key` | The secret key of the Aliyun OSS. | (none) | Yes if it's an OSS fileset. | 0.7.0-incubating |
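To summarize the three tables above, the per-provider required option keys can be checked programmatically before constructing the client. This is an illustrative Python sketch, not part of the Gravitino Python client:

```python
# Required GVFS Python client options per fileset type, per the tables above.
REQUIRED_OPTIONS = {
    "s3": {"s3_endpoint", "s3_access_key_id", "s3_secret_access_key"},
    "gcs": {"gcs_service_account_file"},
    "oss": {"oss_endpoint", "oss_access_key_id", "oss_secret_access_key"},
}

def missing_options(fileset_type, options):
    """Return the required option keys that are absent for the given fileset type."""
    return REQUIRED_OPTIONS[fileset_type] - options.keys()

# Placeholder values for illustration only.
opts = {
    "s3_endpoint": "http://localhost:9000",
    "s3_access_key_id": "minio",
    "s3_secret_access_key": "minio123",
}
print(missing_options("s3", opts))   # -> set()
print(missing_options("gcs", opts))  # -> {'gcs_service_account_file'}
```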
You can configure these properties when obtaining the `Gravitino Virtual FileSystem` in Python like this:
@@ -355,10 +457,21 @@ options = {
"cache_size": 20,
"cache_expired_time": 3600,
    "auth_type": "simple",
+
+    # Optional. The following properties are required to access an S3 fileset via the GVFS Python client; for GCS and OSS filesets, set the corresponding properties.
+    "s3_endpoint": "http://localhost:9000",
+    "s3_access_key_id": "minio",
+    "s3_secret_access_key": "minio123"
}
fs = gvfs.GravitinoVirtualFileSystem(server_uri="http://localhost:8090", metalake_name="test_metalake", options=options)
```
+:::note
+The Gravitino Python client does not support customized filesets defined by users, due to the limitations of the `fsspec` library.
+:::
+
+
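For GCS and OSS filesets the options dictionary has the same shape as the S3 example above. As a hedged sketch (the endpoint and credential values below are placeholders, not values from this commit), the OSS variant might look like:

```python
# Sketch: GVFS client options for an OSS fileset. The endpoint and credential
# values are placeholders; substitute your own Aliyun OSS endpoint and keys.
oss_options = {
    "cache_size": 20,
    "cache_expired_time": 3600,
    "auth_type": "simple",
    # OSS-specific properties, per the table above.
    "oss_endpoint": "http://oss-cn-hangzhou.aliyuncs.com",
    "oss_access_key_id": "access_key",
    "oss_secret_access_key": "secret_key",
}

# All three OSS properties from the table must be present for an OSS fileset.
required = {"oss_endpoint", "oss_access_key_id", "oss_secret_access_key"}
assert required.issubset(oss_options)
```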
### Usage examples
1. Make sure to obtain the Gravitino library.
diff --git a/docs/iceberg-rest-service.md b/docs/iceberg-rest-service.md
index 1f92240bc..4ba8be9c5 100644
--- a/docs/iceberg-rest-service.md
+++ b/docs/iceberg-rest-service.md
@@ -88,14 +88,15 @@ Gravitino Iceberg REST server supports OAuth2 and HTTPS,
please refer to [Securi
For JDBC backend, you can use the `gravitino.iceberg-rest.jdbc.user` and
`gravitino.iceberg-rest.jdbc.password` to authenticate the JDBC connection. For
Hive backend, you can use the `gravitino.iceberg-rest.authentication.type` to
specify the authentication type, and use the
`gravitino.iceberg-rest.authentication.kerberos.principal` and
`gravitino.iceberg-rest.authentication.kerberos.keytab-uri` to authenticate the
Kerberos connection.
The detailed configuration items are as follows:
-| Configuration item | Description | Default value | Required | Since Version |
-|--------------------|-------------|---------------|----------|---------------|
-| `gravitino.iceberg-rest.authentication.type` | The type of authentication for Iceberg rest catalog backend. This configuration only applicable for for Hive backend, and only supports `Kerberos`, `simple` currently. As for JDBC backend, only username/password authentication was supported now. | `simple` | No | 0.7.0-incubating |
-| `gravitino.iceberg-rest.authentication.impersonation-enable` | Whether to enable impersonation for the Iceberg catalog | `false` | No | 0.7.0-incubating |
-| `gravitino.iceberg-rest.authentication.kerberos.principal` | The principal of the Kerberos authentication | (none) | required if the value of `gravitino.iceberg-rest.authentication.type` is Kerberos. | 0.7.0-incubating |
-| `gravitino.iceberg-rest.authentication.kerberos.keytab-uri` | The URI of The keytab for the Kerberos authentication. | (none) | required if the value of `gravitino.iceberg-rest.authentication.type` is Kerberos. | 0.7.0-incubating |
-| `gravitino.iceberg-rest.authentication.kerberos.check-interval-sec` | The check interval of Kerberos credential for Iceberg catalog. | 60 | No | 0.7.0-incubating |
-| `gravitino.iceberg-rest.authentication.kerberos.keytab-fetch-timeout-sec` | The fetch timeout of retrieving Kerberos keytab from `authentication.kerberos.keytab-uri`. | 60 | No | 0.7.0-incubating |
+| Configuration item | Description | Default value | Required | Since Version |
+|--------------------|-------------|---------------|----------|---------------|
+| `gravitino.iceberg-rest.authentication.type` | The type of authentication for the Iceberg REST catalog backend. This configuration is only applicable to the Hive backend, and currently only supports `Kerberos` and `simple`. The JDBC backend currently supports only username/password authentication. | `simple` | No | 0.7.0-incubating |
+| `gravitino.iceberg-rest.authentication.impersonation-enable` | Whether to enable impersonation for the Iceberg catalog. | `false` | No | 0.7.0-incubating |
+| `gravitino.iceberg-rest.hive.metastore.sasl.enabled` | Whether to enable the SASL authentication protocol when connecting to a Kerberos Hive metastore. | `false` | No. In most cases this should be `true` if the value of `gravitino.iceberg-rest.authentication.type` is `Kerberos` (some deployments use the SSL protocol instead, but that is rare). | 0.7.0-incubating |
+| `gravitino.iceberg-rest.authentication.kerberos.principal` | The principal of the Kerberos authentication. | (none) | required if the value of `gravitino.iceberg-rest.authentication.type` is Kerberos. | 0.7.0-incubating |
+| `gravitino.iceberg-rest.authentication.kerberos.keytab-uri` | The URI of the keytab for the Kerberos authentication. | (none) | required if the value of `gravitino.iceberg-rest.authentication.type` is Kerberos. | 0.7.0-incubating |
+| `gravitino.iceberg-rest.authentication.kerberos.check-interval-sec` | The check interval of the Kerberos credential for the Iceberg catalog. | 60 | No | 0.7.0-incubating |
+| `gravitino.iceberg-rest.authentication.kerberos.keytab-fetch-timeout-sec` | The fetch timeout of retrieving the Kerberos keytab from `authentication.kerberos.keytab-uri`. | 60 | No | 0.7.0-incubating |
### Storage
diff --git a/docs/lakehouse-iceberg-catalog.md
b/docs/lakehouse-iceberg-catalog.md
index 2f13f10e7..f4cc48f89 100644
--- a/docs/lakehouse-iceberg-catalog.md
+++ b/docs/lakehouse-iceberg-catalog.md
@@ -137,14 +137,15 @@ Please set the `warehouse` parameter to
`{storage_prefix}://{bucket_name}/${pref
Users can use the following properties to configure the security of the
catalog backend if needed. For example, if you are using a Kerberos Hive
catalog backend, you must set `authentication.type` to `Kerberos` and provide
`authentication.kerberos.principal` and `authentication.kerberos.keytab-uri`.
-| Property name | Description | Default value | Required | Since Version |
-|---------------|-------------|---------------|----------|---------------|
-| `authentication.type` | The type of authentication for Iceberg catalog backend. This configuration only applicable for for Hive backend, and only supports `Kerberos`, `simple` currently. As for JDBC backend, only username/password authentication was supported now. | `simple` | No | 0.6.0-incubating |
-| `authentication.impersonation-enable` | Whether to enable impersonation for the Iceberg catalog | `false` | No | 0.6.0-incubating |
-| `authentication.kerberos.principal` | The principal of the Kerberos authentication | (none) | required if the value of `authentication.type` is Kerberos. | 0.6.0-incubating |
-| `authentication.kerberos.keytab-uri` | The URI of The keytab for the Kerberos authentication. | (none) | required if the value of `authentication.type` is Kerberos. | 0.6.0-incubating |
-| `authentication.kerberos.check-interval-sec` | The check interval of Kerberos credential for Iceberg catalog. | 60 | No | 0.6.0-incubating |
-| `authentication.kerberos.keytab-fetch-timeout-sec` | The fetch timeout of retrieving Kerberos keytab from `authentication.kerberos.keytab-uri`. | 60 | No | 0.6.0-incubating |
+| Property name | Description | Default value | Required | Since Version |
+|---------------|-------------|---------------|----------|---------------|
+| `authentication.type` | The type of authentication for the Iceberg catalog backend. This configuration is only applicable to the Hive backend, and currently only supports `Kerberos` and `simple`. The JDBC backend currently supports only username/password authentication. | `simple` | No | 0.6.0-incubating |
+| `authentication.impersonation-enable` | Whether to enable impersonation for the Iceberg catalog. | `false` | No | 0.6.0-incubating |
+| `hive.metastore.sasl.enabled` | Whether to enable the SASL authentication protocol when connecting to a Kerberos Hive metastore. This is a raw Hive configuration. | `false` | No. In most cases this should be `true` if the value of `authentication.type` is `Kerberos` (some deployments use the SSL protocol instead, but that is rare). | 0.6.0-incubating |
+| `authentication.kerberos.principal` | The principal of the Kerberos authentication. | (none) | required if the value of `authentication.type` is Kerberos. | 0.6.0-incubating |
+| `authentication.kerberos.keytab-uri` | The URI of the keytab for the Kerberos authentication. | (none) | required if the value of `authentication.type` is Kerberos. | 0.6.0-incubating |
+| `authentication.kerberos.check-interval-sec` | The check interval of the Kerberos credential for the Iceberg catalog. | 60 | No | 0.6.0-incubating |
+| `authentication.kerberos.keytab-fetch-timeout-sec` | The fetch timeout of retrieving the Kerberos keytab from `authentication.kerberos.keytab-uri`. | 60 | No | 0.6.0-incubating |
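As a hedged sketch of how these properties fit together when creating a Kerberos-secured Iceberg catalog, the following assembles a properties map; the principal, keytab URI, and metastore URI are hypothetical placeholders, not values from this commit:

```python
# Sketch: catalog properties for a Kerberos-secured Hive backend.
# The principal, keytab URI, and metastore URI below are placeholders.
kerberos_properties = {
    "catalog-backend": "hive",
    "uri": "thrift://hive-metastore:9083",
    "warehouse": "hdfs://namenode:9000/user/iceberg/warehouse",
    "authentication.type": "Kerberos",
    "hive.metastore.sasl.enabled": "true",  # usually true when the metastore is Kerberized
    "authentication.kerberos.principal": "gravitino/[email protected]",
    "authentication.kerberos.keytab-uri": "file:///etc/keytabs/gravitino.keytab",
}

# Per the table: principal and keytab-uri are required once type is Kerberos.
if kerberos_properties["authentication.type"] == "Kerberos":
    assert "authentication.kerberos.principal" in kerberos_properties
    assert "authentication.kerberos.keytab-uri" in kerberos_properties
```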
### Catalog operations
diff --git a/docs/lakehouse-paimon-catalog.md b/docs/lakehouse-paimon-catalog.md
index d23ad0a1b..8cb870552 100644
--- a/docs/lakehouse-paimon-catalog.md
+++ b/docs/lakehouse-paimon-catalog.md
@@ -29,22 +29,23 @@ Builds with Apache Paimon `0.8.0`.
### Catalog properties
-| Property name | Description | Default value | Required | Since Version |
-|---------------|-------------|---------------|----------|---------------|
-| `catalog-backend` | Catalog backend of Gravitino Paimon catalog. Supports `filesystem`, `jdbc` and `hive`. | (none) | Yes | 0.6.0-incubating |
-| `uri` | The URI configuration of the Paimon catalog. `thrift://127.0.0.1:9083` or `jdbc:postgresql://127.0.0.1:5432/db_name` or `jdbc:mysql://127.0.0.1:3306/metastore_db`. It is optional for `FilesystemCatalog`. | (none) | required if the value of `catalog-backend` is not `filesystem`. | 0.6.0-incubating |
-| `warehouse` | Warehouse directory of catalog. `file:///user/hive/warehouse-paimon/` for local fs, `hdfs://namespace/hdfs/path` for HDFS, `s3://{bucket-name}/path/` for S3 or `oss://{bucket-name}/path` for Aliyun OSS | (none) | Yes | 0.6.0-incubating |
-| `authentication.type` | The type of authentication for Paimon catalog backend, currently Gravitino only supports `Kerberos` and `simple`. | `simple` | No | 0.6.0-incubating |
-| `authentication.kerberos.principal` | The principal of the Kerberos authentication. | (none) | required if the value of `authentication.type` is Kerberos. | 0.6.0-incubating |
-| `authentication.kerberos.keytab-uri` | The URI of The keytab for the Kerberos authentication. | (none) | required if the value of `authentication.type` is Kerberos. | 0.6.0-incubating |
-| `authentication.kerberos.check-interval-sec` | The check interval of Kerberos credential for Paimon catalog. | 60 | No | 0.6.0-incubating |
-| `authentication.kerberos.keytab-fetch-timeout-sec` | The fetch timeout of retrieving Kerberos keytab from `authentication.kerberos.keytab-uri`. | 60 | No | 0.6.0-incubating |
-| `oss-endpoint` | The endpoint of the Aliyun oss. | (none) | required if the value of `warehouse` is a oss path | 0.7.0-incubating |
-| `oss-access-key-id` | The access key of the Aliyun oss. | (none) | required if the value of `warehouse` is a oss path | 0.7.0-incubating |
-| `oss-accesss-key-secret` | The secret key the Aliyun s3. | (none) | required if the value of `warehouse` is a oss path | 0.7.0-incubating |
-| `s3-endpoint` | The endpoint of the AWS s3. | (none) | required if the value of `warehouse` is a S3 path | 0.7.0-incubating |
-| `s3-access-key-id` | The access key of the AWS s3. | (none) | required if the value of `warehouse` is a S3 path | 0.7.0-incubating |
-| `s3-secret-access-key` | The secret key of the AWS s3. | (none) | required if the value of `warehouse` is a S3 path | 0.7.0-incubating |
+| Property name | Description | Default value | Required | Since Version |
+|---------------|-------------|---------------|----------|---------------|
+| `catalog-backend` | Catalog backend of the Gravitino Paimon catalog. Supports `filesystem`, `jdbc` and `hive`. | (none) | Yes | 0.6.0-incubating |
+| `uri` | The URI configuration of the Paimon catalog, e.g. `thrift://127.0.0.1:9083`, `jdbc:postgresql://127.0.0.1:5432/db_name`, or `jdbc:mysql://127.0.0.1:3306/metastore_db`. It is optional for `FilesystemCatalog`. | (none) | required if the value of `catalog-backend` is not `filesystem`. | 0.6.0-incubating |
+| `warehouse` | Warehouse directory of the catalog: `file:///user/hive/warehouse-paimon/` for local fs, `hdfs://namespace/hdfs/path` for HDFS, `s3://{bucket-name}/path/` for S3, or `oss://{bucket-name}/path` for Aliyun OSS. | (none) | Yes | 0.6.0-incubating |
+| `authentication.type` | The type of authentication for the Paimon catalog backend; currently Gravitino only supports `Kerberos` and `simple`. | `simple` | No | 0.6.0-incubating |
+| `hive.metastore.sasl.enabled` | Whether to enable the SASL authentication protocol when connecting to a Kerberos Hive metastore. This is a raw Hive configuration. | `false` | No. In most cases this should be `true` if the value of `authentication.type` is `Kerberos` (some deployments use the SSL protocol instead, but that is rare). | 0.6.0-incubating |
+| `authentication.kerberos.principal` | The principal of the Kerberos authentication. | (none) | required if the value of `authentication.type` is Kerberos. | 0.6.0-incubating |
+| `authentication.kerberos.keytab-uri` | The URI of the keytab for the Kerberos authentication. | (none) | required if the value of `authentication.type` is Kerberos. | 0.6.0-incubating |
+| `authentication.kerberos.check-interval-sec` | The check interval of the Kerberos credential for the Paimon catalog. | 60 | No | 0.6.0-incubating |
+| `authentication.kerberos.keytab-fetch-timeout-sec` | The fetch timeout of retrieving the Kerberos keytab from `authentication.kerberos.keytab-uri`. | 60 | No | 0.6.0-incubating |
+| `oss-endpoint` | The endpoint of the Aliyun OSS. | (none) | required if the value of `warehouse` is an OSS path | 0.7.0-incubating |
+| `oss-access-key-id` | The access key of the Aliyun OSS. | (none) | required if the value of `warehouse` is an OSS path | 0.7.0-incubating |
+| `oss-accesss-key-secret` | The secret key of the Aliyun OSS. | (none) | required if the value of `warehouse` is an OSS path | 0.7.0-incubating |
+| `s3-endpoint` | The endpoint of the AWS S3. | (none) | required if the value of `warehouse` is an S3 path | 0.7.0-incubating |
+| `s3-access-key-id` | The access key of the AWS S3. | (none) | required if the value of `warehouse` is an S3 path | 0.7.0-incubating |
+| `s3-secret-access-key` | The secret key of the AWS S3. | (none) | required if the value of `warehouse` is an S3 path | 0.7.0-incubating |
:::note
If you want to use the `oss` or `s3` warehouse, you need to place the related jars in the `catalogs/lakehouse-paimon/lib` directory; more information can be found in the [Paimon S3](https://paimon.apache.org/docs/master/filesystems/s3/) documentation.
diff --git a/docs/manage-fileset-metadata-using-gravitino.md
b/docs/manage-fileset-metadata-using-gravitino.md
index fe1d33040..b6ba3ed2e 100644
--- a/docs/manage-fileset-metadata-using-gravitino.md
+++ b/docs/manage-fileset-metadata-using-gravitino.md
@@ -52,6 +52,22 @@ curl -X POST -H "Accept: application/vnd.gravitino.v1+json" \
"location": "file:/tmp/root"
}
}' http://localhost:8090/api/metalakes/metalake/catalogs
+
+# create an S3 catalog
+curl -X POST -H "Accept: application/vnd.gravitino.v1+json" \
+-H "Content-Type: application/json" -d '{
+  "name": "catalog",
+  "type": "FILESET",
+  "comment": "comment",
+  "provider": "hadoop",
+  "properties": {
+    "location": "s3a://bucket/root",
+    "s3-access-key-id": "access_key",
+    "s3-secret-access-key": "secret_key",
+    "s3-endpoint": "https://s3.us-east-1.amazonaws.com",
+    "filesystem-providers": "s3"
+  }
+}' http://localhost:8090/api/metalakes/metalake/catalogs
```
</TabItem>
@@ -74,6 +90,21 @@ Catalog catalog = gravitinoClient.createCatalog("catalog",
"hadoop", // provider, Gravitino only supports "hadoop" for now.
"This is a Hadoop fileset catalog",
properties);
+
+// create an S3 catalog
+Map<String, String> s3Properties = ImmutableMap.<String, String>builder()
+    .put("location", "s3a://bucket/root")
+    .put("s3-access-key-id", "access_key")
+    .put("s3-secret-access-key", "secret_key")
+    .put("s3-endpoint", "https://s3.us-east-1.amazonaws.com")
+    .put("filesystem-providers", "s3")
+    .build();
+
+Catalog s3Catalog = gravitinoClient.createCatalog("catalog",
+    Type.FILESET,
+    "hadoop", // provider, Gravitino only supports "hadoop" for now.
+    "This is an S3 fileset catalog",
+    s3Properties);
// ...
```
@@ -87,6 +118,20 @@ catalog = gravitino_client.create_catalog(name="catalog",
provider="hadoop",
comment="This is a Hadoop fileset
catalog",
properties={"location":
"/tmp/test1"})
+
+# create an S3 catalog
+s3_properties = {
+    "location": "s3a://bucket/root",
+    "s3-access-key-id": "access_key",
+    "s3-secret-access-key": "secret_key",
+    "s3-endpoint": "https://s3.us-east-1.amazonaws.com",
+    "filesystem-providers": "s3"
+}
+
+s3_catalog = gravitino_client.create_catalog(name="catalog",
+                                             type=Catalog.Type.FILESET,
+                                             provider="hadoop",
+                                             comment="This is an S3 fileset catalog",
+                                             properties=s3_properties)
```
</TabItem>
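Whichever client is used, the S3 catalog needs the access key, secret key, endpoint, and `filesystem-providers` entries. A small hypothetical pre-flight check (not part of the Gravitino API) could catch a forgotten property before the request is sent:

```python
def missing_s3_properties(properties: dict) -> list:
    """Return the required S3 catalog properties absent from `properties`."""
    required = ["s3-access-key-id", "s3-secret-access-key",
                "s3-endpoint", "filesystem-providers"]
    return [key for key in required if key not in properties]

# Example: the endpoint was forgotten.
props = {"location": "s3a://bucket/root",
         "s3-access-key-id": "access_key",
         "s3-secret-access-key": "secret_key",
         "filesystem-providers": "s3"}
print(missing_s3_properties(props))  # -> ['s3-endpoint']
```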
@@ -314,6 +359,13 @@ Currently, Gravitino supports two **types** of filesets:
The `storageLocation` is the physical location of the fileset. Users can
specify this location
when creating a fileset, or follow the rules of the catalog/schema location if
not specified.
+The value of `storageLocation` depends on the configuration settings of the catalog:
+- If this is an S3 fileset catalog, the `storageLocation` should be in the format of `s3a://bucket-name/path/to/fileset`.
+- If this is an OSS fileset catalog, the `storageLocation` should be in the format of `oss://bucket-name/path/to/fileset`.
+- If this is a local fileset catalog, the `storageLocation` should be in the format of `file:/path/to/fileset`.
+- If this is an HDFS fileset catalog, the `storageLocation` should be in the format of `hdfs://namenode:port/path/to/fileset`.
+- If this is a GCS fileset catalog, the `storageLocation` should be in the format of `gs://bucket-name/path/to/fileset`.
+
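The scheme rules above can be condensed into a small hypothetical helper; the mapping simply mirrors the list and is not a Gravitino API:

```python
from urllib.parse import urlparse

# Expected URI scheme per catalog storage type, per the list above.
EXPECTED_SCHEME = {
    "s3": "s3a",
    "oss": "oss",
    "local": "file",
    "hdfs": "hdfs",
    "gcs": "gs",
}

def storage_location_matches(catalog_type: str, storage_location: str) -> bool:
    """Check that a storageLocation uses the scheme expected for the catalog type."""
    return urlparse(storage_location).scheme == EXPECTED_SCHEME[catalog_type]

assert storage_location_matches("s3", "s3a://bucket-name/path/to/fileset")
assert not storage_location_matches("gcs", "s3a://bucket-name/path/to/fileset")
```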
For a `MANAGED` fileset, the storage location is:
1. The one specified by the user during the fileset creation.