Saranviveka commented on issue #6853:
URL: https://github.com/apache/iceberg/issues/6853#issuecomment-1433588781
When we create partitions on a date column by year and month, Spark lets us filter on the year or month value, but only as a range; an equality comparison returns no rows:
scala> spark.sql("""select * from
iceberg.catbdsql_glue_catalog.iceberg_table where trans_ts= '2019'
""").show(false)
+--------+-----------+------------+--------+--------+
|order_id|customer_id|order_amount|category|trans_ts|
+--------+-----------+------------+--------+--------+
+--------+-----------+------------+--------+--------+
scala> spark.sql("""select * from
iceberg.catbdsql_glue_catalog.iceberg_table where trans_ts >= '2019' and
trans_ts < '2020' """).show(false)
+--------+-----------+------------+--------+-------------------+
|order_id|customer_id|order_amount|category|trans_ts |
+--------+-----------+------------+--------+-------------------+
|10001 |1 |6.17 |soap |2019-06-13 13:22:30|
+--------+-----------+------------+--------+-------------------+
scala> spark.sql("""select * from
iceberg.catbdsql_glue_catalog.iceberg_table where trans_ts >= '2019-06' and
trans_ts < '2019-07' """).show(false)
+--------+-----------+------------+--------+-------------------+
|order_id|customer_id|order_amount|category|trans_ts |
+--------+-----------+------------+--------+-------------------+
|10001 |1 |6.17 |soap |2019-06-13 13:22:30|
+--------+-----------+------------+--------+-------------------+
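The range trick works because the engine treats the partial string as the start of the corresponding period, so '2019' and '2019-06' become the lower bounds of a half-open interval. As a minimal Python sketch of those range semantics (not Spark code; the function names are illustrative, not part of any API):

```python
from datetime import datetime

def in_year(ts: datetime, year: int) -> bool:
    # "trans_ts >= '2019' and trans_ts < '2020'" as explicit bounds
    return datetime(year, 1, 1) <= ts < datetime(year + 1, 1, 1)

def in_month(ts: datetime, year: int, month: int) -> bool:
    # "trans_ts >= '2019-06' and trans_ts < '2019-07'" as explicit bounds
    upper = datetime(year + 1, 1, 1) if month == 12 else datetime(year, month + 1, 1)
    return datetime(year, month, 1) <= ts < upper

ts = datetime(2019, 6, 13, 13, 22, 30)
print(in_year(ts, 2019), in_month(ts, 2019, 6))  # True True
```

This also shows why the equality filter finds nothing: '2019' alone denotes the single instant 2019-01-01 00:00:00, not the whole year.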
On other query engines, however, the same queries are not possible, because the literal values are validated against the data type of the column being filtered. In addition, some engines require explicit casting of date and timestamp values up front; in those cases we can't just provide 'yyyy' or 'yyyy-MM', as the engine throws an error.
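For such engines, one portable workaround is to render the same half-open range with fully spelled-out timestamp literals instead of relying on implicit string-to-timestamp casting. A hypothetical helper (the function name and the TIMESTAMP literal syntax are an assumption about the target engine's SQL dialect, not something from this issue):

```python
def year_filter(column: str, year: int) -> str:
    # Render "column in year" as a half-open range with explicit
    # TIMESTAMP literals, avoiding bare 'yyyy' strings entirely.
    return (f"{column} >= TIMESTAMP '{year}-01-01 00:00:00' "
            f"AND {column} < TIMESTAMP '{year + 1}-01-01 00:00:00'")

print(year_filter("trans_ts", 2019))
# trans_ts >= TIMESTAMP '2019-01-01 00:00:00' AND trans_ts < TIMESTAMP '2020-01-01 00:00:00'
```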
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]