Thanks xianjin. It's working now.
I also created a PR to enhance the documentation
https://github.com/apache/iceberg/pull/9478
Thanks,
Manu
On Thu, Jan 11, 2024 at 11:08 AM xianjin wrote:
You can create an Iceberg table with a required field, for example:
create table test_table (id bigint not null, data string) using iceberg
However, you cannot change an optional field to required after creation.
See this issue for more details:
https://github.com/apache/iceberg/issues/3617
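A minimal sketch of both points (table and column names are illustrative; note that Iceberg's Spark DDL does allow relaxing a required column, just not the reverse, since existing rows may already contain nulls):

```sql
-- Create the table with the field required up front.
CREATE TABLE test_table (id BIGINT NOT NULL, data STRING) USING iceberg;

-- Relaxing required -> optional is supported:
ALTER TABLE test_table ALTER COLUMN id DROP NOT NULL;

-- But tightening optional -> required is rejected (see issue #3617):
-- ALTER TABLE test_table ALTER COLUMN data SET NOT NULL;  -- not supported
```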
It looks like there's no way to explicitly add a required column in DDL.
Any suggestions?
Much appreciated
Manu
On Tue, Jan 9, 2024 at 3:37 PM Manu Zhang wrote:
Thanks Peter and Ryan for the info.
As identifier fields need to be "required", how can I alter an optional
column to be required in Spark SQL?
Thanks,
Manu
On Fri, Jan 5, 2024 at 12:50 AM Ryan Blue wrote:
You can set the primary key fields in Spark using `ALTER TABLE`:
`ALTER TABLE t SET IDENTIFIER FIELDS id`
Spark doesn't support any primary key syntax, so you have to do this as a
separate step.
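A hedged sketch of the two-step flow Ryan describes (table and column names are illustrative; identifier fields must already be non-null, and Iceberg's Spark DDL also provides the inverse `DROP IDENTIFIER FIELDS`):

```sql
-- Step 1: create the table with the key columns required.
CREATE TABLE t (id BIGINT NOT NULL, data STRING NOT NULL) USING iceberg;

-- Step 2: promote them to identifier fields separately.
ALTER TABLE t SET IDENTIFIER FIELDS id;

-- Identifier fields can later be changed or removed:
ALTER TABLE t SET IDENTIFIER FIELDS id, data;
ALTER TABLE t DROP IDENTIFIER FIELDS data;
```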
On Thu, Jan 4, 2024 at 8:46 AM Péter Váry
wrote:
Hi Manu,
The Iceberg Schema defines the `identifierFieldIds` method [1], and Flink uses
that as the primary key.
Are you saying there is no way to set it in Spark and Trino?
Thanks,
Peter
[1]
https://github.com/apache/iceberg/blob/9a00f7477dedac4501fb2de9e1e6d7aa83dc20b7/api/src/main/java/org/apache
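For context on the Flink side Peter mentions: Flink SQL can declare a primary key directly, and the Iceberg Flink connector maps it onto the schema's identifier fields. A sketch, assuming a Flink catalog named `flink_catalog` is already configured (the catalog and table names are illustrative):

```sql
-- Flink SQL: PRIMARY KEY ... NOT ENFORCED becomes the
-- Iceberg table's identifier fields (used for upsert writes).
CREATE TABLE flink_catalog.db.sample (
  id BIGINT,
  data STRING,
  PRIMARY KEY (id) NOT ENFORCED
);
```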