openinx opened a new issue #2572:
URL: https://github.com/apache/iceberg/issues/2572


   After collecting some feedback from our Flink users, the main pain point is that today we have to create a `catalog` first, then switch to that `catalog` and `database`, and only then can we create an Apache Iceberg table under that `catalog` (see https://iceberg.apache.org/flink/).
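
   For reference, a rough sketch of that current multi-step flow (per the Flink docs linked above) could look like the following; the catalog, database, and table names here are placeholders only:

   ```sql
   -- Step 1: define an iceberg catalog in the Flink SQL session.
   CREATE CATALOG hive_catalog WITH (
        'type'='iceberg',
        'catalog-type'='hive',
        'uri'='thrift://localhost:9083',
        'clients'='5',
        'property-version'='1',
        'warehouse'='hdfs://nn:8020/warehouse/path'
   );

   -- Step 2: switch into the catalog and database.
   USE CATALOG hive_catalog;
   CREATE DATABASE IF NOT EXISTS iceberg_db;
   USE iceberg_db;

   -- Step 3: finally create the iceberg table.
   CREATE TABLE iceberg_sample (
        id BIGINT,
        data STRING
   );
   ```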
   
   In fact, for most Flink SQL users the [straightforward way](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/table/overview/#how-to-use-connectors) is to create an Iceberg table with a `connector=iceberg` option and to pass the catalog properties as Iceberg table properties.
   
   For example, to create an Iceberg table under a given Hive catalog we could execute:
   
   ```sql
   CREATE TABLE iceberg_sample (
        id BIGINT,
        data STRING
   ) WITH (
        'connector'='iceberg',
        'catalog-type'='hive',
        'uri'='thrift://localhost:9083',
        'clients'='5',
        'property-version'='1',
        'warehouse'='hdfs://nn:8020/warehouse/path'
   );
   ``` 
   
   To create an Iceberg table under a given Hadoop catalog, we could execute:
   
   ```sql
   CREATE TABLE iceberg_sample (
        id BIGINT,
        data STRING
   ) WITH (
        'connector'='iceberg',
        'catalog-type'='hadoop',
        'warehouse'='hdfs://nn:8020/warehouse/path'
   );
   ```
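
   Once a `connector=iceberg` table factory like this exists, the table created above could presumably be written and read in place, without any `USE CATALOG` switch. A minimal usage sketch (assuming the usual Flink SQL read/write paths of the iceberg connector):

   ```sql
   -- Write a couple of rows into the table and read them back,
   -- staying in the session's current catalog the whole time.
   INSERT INTO iceberg_sample VALUES (1, 'foo'), (2, 'bar');

   SELECT * FROM iceberg_sample;
   ```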

