szehon-ho commented on code in PR #51506:
URL: https://github.com/apache/spark/pull/51506#discussion_r2214561309


##########
sql/catalyst/src/test/scala/org/apache/spark/sql/connector/catalog/InMemoryBaseTable.scala:
##########
@@ -571,6 +591,17 @@ abstract class InMemoryBaseTable(
       override def reportDriverMetrics(): Array[CustomTaskMetric] = {
         Array(new InMemoryCustomDriverTaskMetric(rows.size))
       }
+
+      def mergeSchema(oldType: StructType, newType: StructType): StructType = {
+        val (oldFields, newFields) = (oldType.fields, newType.fields)
+
+        // for now, this does not override an old field with a new field of the same name
+        val nameToFieldMap = oldFields.map(f => f.name -> f).toMap
+        val remainingNewFields = newFields.filterNot(f => nameToFieldMap.contains(f.name))

Review Comment:
   Yeah, it's up to the DSv2 implementation. For example, Iceberg uses a config [1], which is actually the Spark config `spark.sql.caseSensitive` [2]. ref:
   
   1. https://github.com/apache/iceberg/blob/main/spark/v4.0/spark/src/main/java/org/apache/iceberg/spark/source/SparkWriteBuilder.java#L199
   2. https://github.com/apache/iceberg/blob/main/spark/v4.0/spark/src/main/java/org/apache/iceberg/spark/SparkWriteConf.java#L456
   
   I was just making a simple example for InMemoryTable; should I make it more complex and take this property into account?
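   
   If we do take the property into account, a minimal sketch of what that could look like is below. The `caseSensitive` parameter and how it would be plumbed in from the session/write config are assumptions for illustration, not something this PR implements:
   
   ```scala
   import java.util.Locale
   
   import org.apache.spark.sql.types.StructType
   
   // Hedged sketch: merge two schemas while honoring a case-sensitivity flag,
   // similar in spirit to what Iceberg derives from spark.sql.caseSensitive.
   // In a real DSv2 source the flag would come from the session/write config.
   def mergeSchema(
       oldType: StructType,
       newType: StructType,
       caseSensitive: Boolean): StructType = {
     // Normalize names for lookup only; the original field casing is preserved.
     def key(name: String): String =
       if (caseSensitive) name else name.toLowerCase(Locale.ROOT)
   
     val existingKeys = oldType.fields.map(f => key(f.name)).toSet
     // As in the snippet above, existing fields win; only genuinely new fields
     // (under the chosen name comparison) are appended at the end.
     val appended = newType.fields.filterNot(f => existingKeys.contains(key(f.name)))
     StructType(oldType.fields ++ appended)
   }
   ```
   
   With `caseSensitive = false`, a new field named `ID` would match the existing `id` and be skipped; with `caseSensitive = true` it would be appended as a separate column.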


