choucmei commented on code in PR #1836:
URL: https://github.com/apache/incubator-paimon/pull/1836#discussion_r1297136704


##########
paimon-spark/paimon-spark-common/src/main/java/org/apache/paimon/spark/SparkTable.java:
##########
@@ -137,4 +153,97 @@ public Map<String, String> properties() {
             return Collections.emptyMap();
         }
     }
+
+    @Override
+    public StructType partitionSchema() {
+        List<String> partitionKeys = table.partitionKeys();
+        RowType rowType =
+                new RowType(
+                        table.rowType().getFields().stream()
+                                .filter(dataField -> partitionKeys.contains(dataField.name()))
+                                .collect(Collectors.toList()));
+        return SparkTypeUtils.fromPaimonRowType(rowType);
+    }
+
+    @Override
+    public void createPartition(InternalRow internalRow, Map<String, String> map)
+            throws PartitionsAlreadyExistException, UnsupportedOperationException {
+        throw new UnsupportedOperationException();
+    }
+
+    @Override
+    public boolean dropPartition(InternalRow internalRow) {
+        StructType structType = partitionSchema();
+        StructField[] fields = structType.fields();
+        HashMap<String, String> partitions = new HashMap<>();

Review Comment:
   Do you mean something like this?
   ```
           List<String> partitionKeys = table.partitionKeys();
           RowType rowType =
                   new RowType(
                           table.rowType().getFields().stream()
                                   .filter(dataField -> partitionKeys.contains(dataField.name()))
                                   .collect(Collectors.toList()));
           StructType structType = SparkTypeUtils.fromPaimonRowType(rowType);
           String[] partitionCols = partitionKeys.toArray(new String[0]);
           RowDataPartitionComputer rowDataPartitionComputer =
                   new RowDataPartitionComputer(
                           FileStorePathFactory.PARTITION_DEFAULT_NAME.defaultValue(),
                           rowType,
                           partitionCols);
           LinkedHashMap<String, String> partitions =
                   rowDataPartitionComputer.generatePartValues(
                           new SparkRow(rowType, Row.fromSeq(internalRow.toSeq(structType))));
   ```
   The `internalRow` needs to be converted to a `Row` and then wrapped in a `SparkRow`.
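   
   For context, here is a self-contained sketch of how `dropPartition` could apply that conversion end to end. It is only a sketch: the trailing `dropPartitionBySpec` helper is hypothetical, standing in for whatever delete logic the rest of this PR wires up.
   ```
       @Override
       public boolean dropPartition(InternalRow internalRow) {
           // Build the partition RowType and its Spark schema, as in partitionSchema().
           List<String> partitionKeys = table.partitionKeys();
           RowType rowType =
                   new RowType(
                           table.rowType().getFields().stream()
                                   .filter(dataField -> partitionKeys.contains(dataField.name()))
                                   .collect(Collectors.toList()));
           StructType structType = SparkTypeUtils.fromPaimonRowType(rowType);

           // Convert the Catalyst InternalRow to an external Row, then wrap it in
           // SparkRow so RowDataPartitionComputer can read it as Paimon data.
           SparkRow sparkRow =
                   new SparkRow(rowType, Row.fromSeq(internalRow.toSeq(structType)));

           String[] partitionCols = partitionKeys.toArray(new String[0]);
           RowDataPartitionComputer computer =
                   new RowDataPartitionComputer(
                           FileStorePathFactory.PARTITION_DEFAULT_NAME.defaultValue(),
                           rowType,
                           partitionCols);
           LinkedHashMap<String, String> partitionSpec = computer.generatePartValues(sparkRow);

           // dropPartitionBySpec is hypothetical; the real delete call depends on
           // what the rest of this PR implements.
           return dropPartitionBySpec(partitionSpec);
       }
   ```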


