kbendick commented on a change in pull request #3633:
URL: https://github.com/apache/iceberg/pull/3633#discussion_r759718222



##########
File path: spark/v3.2/spark/src/main/java/org/apache/spark/sql/connector/iceberg/write/RowLevelOperation.java
##########
@@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.spark.sql.connector.iceberg.write;
+
+import org.apache.spark.sql.connector.expressions.NamedReference;
+import org.apache.spark.sql.connector.read.ScanBuilder;
+import org.apache.spark.sql.connector.write.WriteBuilder;
+import org.apache.spark.sql.util.CaseInsensitiveStringMap;
+
+/**
+ * A logical representation of a data source DELETE, UPDATE, or MERGE operation that requires
+ * rewriting data.
+ */
+public interface RowLevelOperation {
+
+  /**
+   * The SQL operation being performed.
+   */
+  enum Command {
+    DELETE, UPDATE, MERGE
+  }
+
+  /**
+   * Returns the description associated with this row-level operation.
+   */
+  default String description() {
+    return this.getClass().toString();
+  }

Review comment:
       Question: Is this something that will appear in the Spark UI or in the query execution plan?
   
   If so, it might be better to leave it as abstract. I find that when we have default implementations, people tend to skip them pretty often.
   
   But if it's not visibly present anywhere, it's definitely not a big concern.
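   
   As a purely illustrative sketch (the class name is hypothetical, and the remaining RowLevelOperation methods are left abstract here since only part of the interface appears in this diff), an implementation could return something more readable than `getClass().toString()`:
   
       import org.apache.spark.sql.connector.iceberg.write.RowLevelOperation;
   
       abstract class ExampleRowLevelOperation implements RowLevelOperation {
   
         private final Command command;
   
         ExampleRowLevelOperation(Command command) {
           this.command = command;
         }
   
         @Override
         public Command command() {
           return command;
         }
   
         @Override
         public String description() {
           // a readable name helps if Spark ever surfaces this string in the UI or explain output
           return "IcebergRowLevelOperation(" + command + ")";
         }
       }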

##########
File path: spark/v3.2/spark/src/main/java/org/apache/spark/sql/connector/iceberg/write/RowLevelOperation.java
##########
@@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.spark.sql.connector.iceberg.write;
+
+import org.apache.spark.sql.connector.expressions.NamedReference;
+import org.apache.spark.sql.connector.read.ScanBuilder;
+import org.apache.spark.sql.connector.write.WriteBuilder;
+import org.apache.spark.sql.util.CaseInsensitiveStringMap;
+
+/**
+ * A logical representation of a data source DELETE, UPDATE, or MERGE operation that requires
+ * rewriting data.
+ */
+public interface RowLevelOperation {
+
+  /**
+   * The SQL operation being performed.
+   */
+  enum Command {
+    DELETE, UPDATE, MERGE

Review comment:
       Question for my own understanding:
   
   For Delta Changefeed, they use `update_before` and `update_after` in the resulting dataframe on read.
   
   Since this is the write interface, I assume we don't need that. Will Spark be able to express that on read in the future?

##########
File path: spark/v3.2/spark/src/main/java/org/apache/spark/sql/connector/iceberg/write/ExtendedLogicalWriteInfo.java
##########
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.spark.sql.connector.iceberg.write;
+
+import org.apache.spark.sql.connector.write.LogicalWriteInfo;
+import org.apache.spark.sql.types.StructType;
+
+/**
+ * A class that holds logical write information not covered by LogicalWriteInfo in Spark.
+ */
+public interface ExtendedLogicalWriteInfo extends LogicalWriteInfo {
+  /**
+   * the schema of the input metadata from Spark to data source.
+   */
+  StructType metadataSchema();
+
+  /**
+   * the schema of the ID columns from Spark to data source.

Review comment:
       Same note on capitalization.

##########
File path: spark/v3.2/spark/src/main/java/org/apache/spark/sql/connector/iceberg/write/ExtendedLogicalWriteInfo.java
##########
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.spark.sql.connector.iceberg.write;
+
+import org.apache.spark.sql.connector.write.LogicalWriteInfo;
+import org.apache.spark.sql.types.StructType;
+
+/**
+ * A class that holds logical write information not covered by LogicalWriteInfo in Spark.
+ */
+public interface ExtendedLogicalWriteInfo extends LogicalWriteInfo {
+  /**
+   * the schema of the input metadata from Spark to data source.
+   */
+  StructType metadataSchema();

Review comment:
       Nit / Non-blocking: Consider capitalizing the javadoc sentence `The`.
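   
   For context on what this method carries, a hypothetical metadata schema could be assembled roughly like this (the `_file` and `_pos` column names are only illustrative of file-path and row-position metadata):
   
       import org.apache.spark.sql.types.DataTypes;
       import org.apache.spark.sql.types.StructType;
   
       class MetadataSchemaExample {
         // illustrative only: per-row metadata a source might report to Spark
         static StructType metadataSchema() {
           return new StructType()
               .add("_file", DataTypes.StringType)
               .add("_pos", DataTypes.LongType);
         }
       }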

##########
File path: spark/v3.2/spark/src/main/java/org/apache/spark/sql/connector/iceberg/write/RowLevelOperation.java
##########
@@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.spark.sql.connector.iceberg.write;
+
+import org.apache.spark.sql.connector.expressions.NamedReference;
+import org.apache.spark.sql.connector.read.ScanBuilder;
+import org.apache.spark.sql.connector.write.WriteBuilder;
+import org.apache.spark.sql.util.CaseInsensitiveStringMap;
+
+/**
+ * A logical representation of a data source DELETE, UPDATE, or MERGE operation that requires
+ * rewriting data.
+ */
+public interface RowLevelOperation {
+
+  /**
+   * The SQL operation being performed.
+   */
+  enum Command {
+    DELETE, UPDATE, MERGE
+  }
+
+  /**
+   * Returns the description associated with this row-level operation.
+   */
+  default String description() {
+    return this.getClass().toString();
+  }
+
+  /**
+   * Returns the actual SQL operation being performed.
+   */
+  Command command();
+
+  /**
+   * Returns a scan builder to configure a scan for this row-level operation.
+   * <p>
+   * Sources fall into two categories: those that can handle a delta of rows and those that need
+   * to replace groups (e.g. partitions, files). Sources that handle deltas allow Spark to quickly
+   * discard unchanged rows and have no requirements for input scans. Sources that replace groups

Review comment:
       I'm +1 on being able to gather more metrics. Perhaps that's something that could be put behind a flag, similar to how cardinality estimation is put behind a flag for `MERGE INTO`?
   
   I can see situations where we don't need the metrics or where gathering the metrics is prohibitively slow, but I do agree with preferring to keep them as a possibility whenever feasible.
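   
   As a rough sketch of how such a flag could be read (the `metrics-enabled` option name is made up for illustration, not an existing property), the write options already exposed through `CaseInsensitiveStringMap` could drive it:
   
       import org.apache.spark.sql.util.CaseInsensitiveStringMap;
   
       class MetricsFlagExample {
         // returns whether row-level operation metrics should be gathered,
         // defaulting to true when the hypothetical option is absent
         static boolean metricsEnabled(CaseInsensitiveStringMap options) {
           return options.getBoolean("metrics-enabled", true);
         }
       }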




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


