nchammas commented on PR #44333:
URL: https://github.com/apache/spark/pull/44333#issuecomment-1856245494
Hmm, I am seeing a different plan:
```sql
spark-sql (default)> CREATE TABLE test (
> s string
> )
> PARTITIONED BY (
> dt string,
> hour string
> );
Time taken: 0.222 seconds
spark-sql (default)> explain select * from test where dt=20231212 and hour=22;
== Physical Plan ==
Scan hive spark_catalog.default.test (1)
(1) Scan hive spark_catalog.default.test
Output [3]: [s#0, dt#1, hour#2]
Arguments: [s#0, dt#1, hour#2],
HiveTableRelation [
`spark_catalog`.`default`.`test`,
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe,
Data Cols: [s#0], Partition Cols: [dt#1, hour#2], Pruned Partitions: []
], [isnotnull(dt#1), isnotnull(hour#2), (cast(dt#1 as int) = 20231212),
(cast(hour#2 as int) = 22)]
```
And if I try another table, but this time `USING parquet`, I see
`PartitionFilters` accepts the cast without issue:
```sql
spark-sql (default)> CREATE TABLE test (
> s string
> )
> USING parquet
> PARTITIONED BY (
> dt string,
> hour string
> );
Time taken: 0.1 seconds
spark-sql (default)> explain select * from test where dt=20231212 and hour=22;
== Physical Plan ==
*(1) ColumnarToRow
+- FileScan parquet spark_catalog.default.test[s#19,dt#20,hour#21] Batched:
true, DataFilters: [], Format: Parquet,
Location: InMemoryFileIndex(0 paths)[],
PartitionFilters: [isnotnull(dt#20), isnotnull(hour#21), (cast(dt#20 as
int) = 20231212), (cast(hour#21 as int) = 22)],
PushedFilters: [], ReadSchema: struct<s:string>
```
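For what it's worth, in both plans the cast shows up because the integer literal `20231212` forces the string partition column to be cast, and a predicate on `cast(dt as int)` is no longer a plain equality on the partition column. If I quote the literals so they compare as strings, I'd expect the cast to disappear and the filters to stay as direct partition-column equalities (this is my reading of the plans above, not something I've dug into the code to confirm):

```sql
-- Comparing against string literals should avoid the implicit cast,
-- leaving PartitionFilters as plain equalities on dt and hour:
explain select * from test where dt = '20231212' and hour = '22';
```

That might also explain why the Hive relation shows `Pruned Partitions: []` here, if the metastore-side pruning can only push down simple predicates on the raw partition columns.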
Anyway, I don't mean to waste your time. I was just trying to reproduce the issue, but it seems there are more details involved here than I'm following.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]