JingsongLi commented on code in PR #1836:
URL: https://github.com/apache/incubator-paimon/pull/1836#discussion_r1296938201
##########
paimon-spark/paimon-spark-common/src/main/java/org/apache/paimon/spark/SparkTable.java:
##########
@@ -137,4 +153,97 @@ public Map<String, String> properties() {
return Collections.emptyMap();
}
}
+
+ @Override
+ public StructType partitionSchema() {
+ List<String> partitionKeys = table.partitionKeys();
+ RowType rowType =
+ new RowType(
+ table.rowType().getFields().stream()
+ .filter(dataField -> partitionKeys.contains(dataField.name()))
+ .collect(Collectors.toList()));
+ return SparkTypeUtils.fromPaimonRowType(rowType);
+ }
+
+ @Override
+ public void createPartition(InternalRow internalRow, Map<String, String> map)
+ throws PartitionsAlreadyExistException, UnsupportedOperationException {
+ throw new UnsupportedOperationException();
+ }
+
+ @Override
+ public boolean dropPartition(InternalRow internalRow) {
+ StructType structType = partitionSchema();
+ StructField[] fields = structType.fields();
+ HashMap<String, String> partitions = new HashMap<>();
Review Comment:
Can you use `RowDataPartitionComputer` here?
Then we can convert the `internalRow` to a Paimon `SparkRow`.
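
For context, here is a minimal standalone sketch of the pattern being suggested: deriving the partition-name → string-value map that `dropPartition` needs from the partition column names and one row of values. The class and method names below are hypothetical illustrations, not the actual Paimon `RowDataPartitionComputer` API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of what a partition computer does conceptually:
// given the partition column names and one row of partition values,
// produce the name -> string-value map used to identify the partition.
public class PartitionComputerSketch {

    static Map<String, String> computePartitionValues(String[] partitionColumns, Object[] row) {
        // LinkedHashMap preserves the partition key order.
        LinkedHashMap<String, String> partitions = new LinkedHashMap<>();
        for (int i = 0; i < partitionColumns.length; i++) {
            partitions.put(partitionColumns[i], String.valueOf(row[i]));
        }
        return partitions;
    }

    public static void main(String[] args) {
        Map<String, String> p =
                computePartitionValues(new String[] {"dt", "hr"}, new Object[] {"2023-08-16", 12});
        System.out.println(p);
    }
}
```

In the real PR, the point of reusing a shared partition computer is that all engines format partition values the same way, instead of each connector hand-rolling the field-to-string conversion.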
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]