Lian,

Iceberg tables work great in S3. When creating a table, just pass a
`LOCATION` clause with an S3 path, or set your catalog's warehouse location
to S3 so tables are automatically created there.
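
For example, in Spark SQL (a sketch; `my_catalog`, `db.sample`, and the
bucket name are placeholders):

    -- writes the table's data and metadata under the given S3 path
    CREATE TABLE my_catalog.db.sample (
        id bigint,
        data string)
    USING iceberg
    LOCATION 's3://my-bucket/db/sample';

Or set the warehouse in your Spark conf so new tables land in S3 by default
(this sketch assumes a Hive Metastore-backed catalog):

    spark.sql.catalog.my_catalog=org.apache.iceberg.spark.SparkCatalog
    spark.sql.catalog.my_catalog.type=hive
    spark.sql.catalog.my_catalog.warehouse=s3://my-bucket/warehouse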

The only restriction for S3 is that you need a metastore to track the table
metadata location, because S3 doesn't provide a way to implement an atomic
metadata commit. For a metastore, there are catalog implementations backed
by the Hive Metastore, Glue/DynamoDB, and Nessie, and the upcoming release
adds catalogs backed by DynamoDB alone (without Glue) and by JDBC.
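
For example, a Glue-backed catalog that keeps its tables in S3 looks roughly
like this (a sketch only; `my_catalog` and the bucket are placeholders, and
the AWS page linked below covers the details):

    # Glue tracks each table's metadata location; S3FileIO handles the files
    spark.sql.catalog.my_catalog=org.apache.iceberg.spark.SparkCatalog
    spark.sql.catalog.my_catalog.catalog-impl=org.apache.iceberg.aws.glue.GlueCatalog
    spark.sql.catalog.my_catalog.warehouse=s3://my-bucket/warehouse
    spark.sql.catalog.my_catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO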

Ryan

On Mon, Aug 9, 2021 at 2:24 AM Eduard Tudenhoefner <edu...@dremio.com>
wrote:

> Lian, you can have a look at https://iceberg.apache.org/aws/. It should
> contain all the info that you need. The codebase contains an *S3FileIO*
> class, which is a FileIO implementation backed by S3.
>
> On Mon, Aug 9, 2021 at 7:37 AM Lian Jiang <jiangok2...@gmail.com> wrote:
>
>> I am reading https://iceberg.apache.org/spark-writes/#spark-writes and
>> wondering if it is possible to create an Iceberg table on S3. This guide
>> seems to say you can only write to a Hive table (backed by HDFS, if I
>> understand correctly). Hudi and Delta can write to S3 with a specified S3
>> path. How can I do it using Iceberg? Thanks for any clarification.

-- 
Ryan Blue
Tabular
