Hi,

I have a Hadoop table, created with the Iceberg Java API, that uses the
timestamp (without time zone) type for a column. Spark cannot work with
the table because of that:

*java.lang.UnsupportedOperationException: Spark does not support timestamp
without time zone fields*

I tried *sql("ALTER TABLE table ALTER COLUMN event_time TYPE timestamptz")*
and got *"DataType timestamptz is not supported"*.

I am able to manually *s/timestamp/timestamptz* in the JSON metadata file
of a table created with the Hive metastore. However, this method does not
work for Hadoop tables, because a checksum error is detected.

Any suggestions? Thank you.

--
Huadong
