felixYyu commented on issue #3494:
URL: https://github.com/apache/iceberg/issues/3494#issuecomment-962768767
spark.sql(
  s"""
     |CREATE TABLE IF NOT EXISTS hadoop_prod.$schemaName.$tableName (
     |  id string,
     |  data string,
     |  jydw_no string,
     |  ts timestamp)
     |USING iceberg
     |-- PARTITIONED BY (bucket(12, id), days(ts), truncate(jydw_no, 5))
     |PARTITIONED BY (days(ts))
     |TBLPROPERTIES ('write.distribution-mode'='hash',
     |  'write.metadata.compression-codec'='gzip',
     |  'write.metadata.delete-after-commit.enabled'='true',
     |  'write.metadata.previous-versions-max'='9')
     |""".stripMargin)
spark.sql(
  s"""
     |INSERT OVERWRITE hadoop_prod.$schemaName.$tableName
     |VALUES ($index, '$index', 'B3$index',
     |  CAST(from_utc_timestamp('2021-11-08 07:00:00', 'Asia/Shanghai') AS timestamp))
     |""".stripMargin)
In this DML, when the from_utc_timestamp function is used the row is written to the '2021-11-08' partition, but when the function is not used it ends up in the '2021-11-07' partition. Which Spark config do I need to set so that the row is partitioned under '2021-11-08'?
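For what it's worth, the difference described above is what one would expect if Iceberg's days(ts) transform is evaluated on the UTC instant while Spark parses the timestamp literal in the session time zone. A minimal sketch of that interaction, assuming the session otherwise runs with spark.sql.session.timeZone set to Asia/Shanghai (UTC+8); this is an illustration of the behaviour being asked about, not a confirmed fix:

// Sketch only: the same literal maps to different UTC instants depending on
// the session time zone, and therefore to different days(ts) partitions.
spark.conf.set("spark.sql.session.timeZone", "Asia/Shanghai")
// Parsed as 2021-11-08 07:00 Shanghai time = 2021-11-07 23:00 UTC,
// so days(ts) would place the row in the 2021-11-07 partition.
spark.sql("SELECT CAST('2021-11-08 07:00:00' AS timestamp) AS ts").show()

spark.conf.set("spark.sql.session.timeZone", "UTC")
// Now parsed as 2021-11-08 07:00 UTC,
// so days(ts) would place the row in the 2021-11-08 partition.
spark.sql("SELECT CAST('2021-11-08 07:00:00' AS timestamp) AS ts").show()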