KnightChess opened a new issue #3598:
URL: https://github.com/apache/iceberg/issues/3598
```sql
CREATE TABLE default.test (id bigint, age bigint, ts timestamp)
USING iceberg
PARTITIONED BY (days(ts));

INSERT INTO default.test SELECT 1 AS id, 10 AS age, to_timestamp('1970-01-02 01:00:00') AS ts;
INSERT INTO default.test SELECT 1 AS id, 10 AS age, to_timestamp('1970-01-02 01:00:00') AS ts;

ALTER TABLE default.test DROP PARTITION FIELD days(ts);

SELECT * FROM default.test.partitions;
```
Stack trace:
```shell
java.lang.IllegalArgumentException: Wrong class, java.lang.Long, for object: 0
	at org.apache.iceberg.PartitionData.get(PartitionData.java:120)
	at org.apache.iceberg.types.Comparators$StructLikeComparator.compare(Comparators.java:122)
	at org.apache.iceberg.types.Comparators$StructLikeComparator.compare(Comparators.java:102)
	at org.apache.iceberg.util.StructLikeWrapper.equals(StructLikeWrapper.java:76)
	at java.util.HashMap.getNode(HashMap.java:571)
	at java.util.HashMap.get(HashMap.java:556)
	at org.apache.iceberg.PartitionsTable$PartitionMap.get(PartitionsTable.java:152)
	at org.apache.iceberg.PartitionsTable.partitions(PartitionsTable.java:101)
	at org.apache.iceberg.PartitionsTable.task(PartitionsTable.java:75)
	at org.apache.iceberg.PartitionsTable.access$300(PartitionsTable.java:35)
	at org.apache.iceberg.PartitionsTable$PartitionsScan.lambda$new$0(PartitionsTable.java:137)
	at org.apache.iceberg.StaticTableScan.planFiles(StaticTableScan.java:72)
	at org.apache.iceberg.BaseTableScan.planFiles(BaseTableScan.java:208)
	at org.apache.iceberg.BaseTableScan.planTasks(BaseTableScan.java:241)
	at org.apache.iceberg.spark.source.SparkBatchQueryScan.tasks(SparkBatchQueryScan.java:122)
	at org.apache.iceberg.spark.source.SparkBatchScan.planInputPartitions(SparkBatchScan.java:143)
	at org.apache.spark.sql.execution.datasources.v2.BatchScanExec.partitions$lzycompute(BatchScanExec.scala:52)
	at org.apache.spark.sql.execution.datasources.v2.BatchScanExec.partitions(BatchScanExec.scala:52)
	at org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanExecBase.supportsColumnar(DataSourceV2ScanExecBase.scala:93)
	at org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanExecBase.supportsColumnar$(DataSourceV2ScanExecBase.scala:92)
	at org.apache.spark.sql.execution.datasources.v2.BatchScanExec.supportsColumnar(BatchScanExec.scala:35)
	at org.apache.spark.sql.execution.datasources.v2.DataSourceV2Strategy.apply(DataSourceV2Strategy.scala:123)
	at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$1(QueryPlanner.scala:63)
```
The relevant code, in `PartitionsTable.java`:
```java
static class PartitionMap {
  private final Map<StructLikeWrapper, Partition> partitions = Maps.newHashMap();
  private final Types.StructType type;
  private final StructLikeWrapper reused;

  PartitionMap(Types.StructType type) {
    this.type = type;
    this.reused = StructLikeWrapper.forType(type);
  }

  Partition get(StructLike key) {
    Partition partition = partitions.get(reused.set(key));
    if (partition == null) {
      partition = new Partition(key);
      partitions.put(StructLikeWrapper.forType(type).set(key), partition);
    }
    return partition;
  }

  Iterable<Partition> all() {
    return partitions.values();
  }
}
```
The `Map<StructLikeWrapper, Partition> partitions = Maps.newHashMap()` stores one entry per partition `StructLike`. When two keys land in the same hash bucket (for example, two data files in the same partition), `HashMap` falls back to `StructLikeWrapper.equals`, which compares the wrapped `StructLike` keys field by field.
After the partition field is dropped, the partition data type reported for the field is `timestamp`, not a date type, so the expected value class is `Long`; but the partition value written before the drop is still an `Integer`, and `PartitionData.get` throws.
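The mismatch can be shown in isolation. Below is a minimal sketch, not Iceberg's actual `PartitionData` (the class name `PartitionDataSketch` and its constructor are hypothetical), of the checked-accessor pattern that fails: the comparator asks for the class the current schema expects (`Long`), while the stored partition value is the old `Integer` day ordinal.

```java
// Minimal sketch of the checked accessor pattern (hypothetical class, not
// Iceberg's PartitionData): stored values are validated against the Java
// class the current partition schema expects.
public class PartitionDataSketch {
  private final Object[] values;

  PartitionDataSketch(Object... values) {
    this.values = values;
  }

  // Return the value at pos, checking it against the expected class.
  <T> T get(int pos, Class<T> javaClass) {
    Object value = values[pos];
    if (value == null || javaClass.isInstance(value)) {
      return javaClass.cast(value);
    }
    throw new IllegalArgumentException(
        "Wrong class, " + javaClass.getName() + ", for object: " + value);
  }

  public static void main(String[] args) {
    // Partition value written before the drop: day ordinal 0, an Integer.
    PartitionDataSketch data = new PartitionDataSketch(0);
    try {
      // After the drop, the schema says timestamp, so a Long is requested.
      data.get(0, Long.class);
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
      // prints "Wrong class, java.lang.Long, for object: 0"
    }
  }
}
```

Reading the same slot with the class it was actually written as (`Integer.class`) succeeds, which is why the scan only breaks once the schema's expected class changes out from under the old data.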
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]