felixYyu commented on a change in pull request #3862:
URL: https://github.com/apache/iceberg/pull/3862#discussion_r793199489
##########
File path:
spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/source/SparkTable.java
##########
@@ -272,6 +290,80 @@ public void deleteWhere(Filter[] filters) {
}
}
+  @Override
+  public StructType partitionSchema() {
+    Types.StructType structType = Partitioning.partitionType(table());
+    List<Types.NestedField> structFields = Lists.newArrayListWithExpectedSize(structType.fields().size());
+    structType.fields().forEach(nestedField -> {
+      if (nestedField.name().endsWith("hour") ||
+          nestedField.name().endsWith("month") ||
+          nestedField.name().endsWith("year")) {
+        structFields.add(Types.NestedField.optional(nestedField.fieldId(), nestedField.name(), Types.StringType.get()));
+      } else {
+        // keep the original partition field unchanged
+        structFields.add(nestedField);
+      }
+    });
+
+    return (StructType) SparkSchemaUtil.convert(Types.StructType.of(structFields));
+  }
+
+  @Override
+  public void createPartition(InternalRow ident, Map<String, String> properties) throws UnsupportedOperationException {
+    throw new UnsupportedOperationException("Cannot explicitly create partitions in Iceberg tables");
+  }
+
+  @Override
+  public boolean dropPartition(InternalRow ident) {
+    throw new UnsupportedOperationException("Cannot explicitly drop partitions in Iceberg tables");
+  }
+
+  @Override
+  public void replacePartitionMetadata(InternalRow ident, Map<String, String> properties)
+      throws UnsupportedOperationException {
+    throw new UnsupportedOperationException("Iceberg partitions do not support metadata");
+  }
+
+  @Override
+  public Map<String, String> loadPartitionMetadata(InternalRow ident) throws UnsupportedOperationException {
+    throw new UnsupportedOperationException("Iceberg partitions do not support metadata");
+  }
+
+  @Override
+  public InternalRow[] listPartitionIdentifiers(String[] names, InternalRow ident) {
+    // support SHOW PARTITIONS
+    List<InternalRow> rows = Lists.newArrayList();
+    Dataset<Row> df = SparkTableUtil.loadMetadataTable(sparkSession(), icebergTable, MetadataTableType.PARTITIONS)
+        .select("partition");
+    StructType schema = partitionSchema();
+    if (names.length > 0) {
Review comment:
For Iceberg's timestamp and date transforms, the result types of the `hour`, `month`, and `year` partition fields are all `IntegerType`, so Spark SQL's `ShowPartitionsExec` calls `row.get(i, dataType)` and displays the raw integer ordinal:
```
public Type getResultType(Type sourceType) {
if (granularity == ChronoUnit.DAYS) {
return Types.DateType.get();
}
return Types.IntegerType.get();
}
```
`ShowPartitionsExec` is the Spark SQL physical plan node for showing partitions. Without converting these fields to strings, the results look like the following, so I think the conversion is needed:
1. Without the conversion:
hours(ts):
```
+---------------------------------------------------+
|partition |
+---------------------------------------------------+
|ts_hour=429536 |
+---------------------------------------------------+
```
days(ts): fails with the error
`requirement failed: Literal must have a corresponding value to date, but class Date found.`
months(ts):
```
+---------------------------------------------------+
|partition |
+---------------------------------------------------+
|ts_month=588 |
+---------------------------------------------------+
```
years(ts):
```
+---------------------------------------------------+
|partition |
+---------------------------------------------------+
|ts_year=49 |
+---------------------------------------------------+
```
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]