rdblue commented on a change in pull request #3459:
URL: https://github.com/apache/iceberg/pull/3459#discussion_r744334568



##########
File path: spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/source/SparkTable.java
##########
@@ -298,6 +317,97 @@ public void deleteWhere(Filter[] filters) {
     }
   }
 
+  @Override
+  public StructType partitionSchema() {
+    List<Types.NestedField> fields = icebergTable.spec().partitionType().fields();
+    StructField[] structFields = new StructField[fields.size()];
+    int index = 0;
+    for (Types.NestedField field : fields) {
+      StructField structField = new StructField(field.name(), SparkSchemaUtil.convert(field.type()), true,
+          Metadata.empty());
+      structFields[index] = structField;
+      ++index;
+    }
+    return new StructType(structFields);
+  }
+
+  @Override
+  public void createPartition(InternalRow ident, Map<String, String> properties)
+      throws PartitionAlreadyExistsException, UnsupportedOperationException {
+    throw new UnsupportedOperationException("not support create partition, use addFile procedure to refresh");

Review comment:
       One more thing: no need to suggest using `addFile`. The person who gets this error will not be familiar with the Iceberg or Spark internal APIs, so the suggestion isn't very useful to add.
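
       A possible shape for the override with the `addFile` hint dropped. This is only a sketch against the signature shown in the diff above; the exact message wording is an assumption, not taken from the PR:

       ```java
         @Override
         public void createPartition(InternalRow ident, Map<String, String> properties)
             throws PartitionAlreadyExistsException, UnsupportedOperationException {
           // Iceberg derives partitions from the data itself, so explicit partition
           // creation through Spark's partition management API is not supported.
           throw new UnsupportedOperationException(
               "Cannot explicitly create partitions in an Iceberg table");  // assumed wording
         }
       ```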




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


