Kimahriman commented on a change in pull request #32448:
URL: https://github.com/apache/spark/pull/32448#discussion_r627317144
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/SchemaMergeUtils.scala
##########
@@ -80,7 +80,7 @@ object SchemaMergeUtils extends Logging {
     var mergedSchema = schemas.head
     schemas.tail.foreach { schema =>
       try {
-        mergedSchema = mergedSchema.merge(schema)
+        mergedSchema = mergedSchema.merge(schema, sparkSession.sessionState.conf.resolver)

Review comment:
   Trying to pull the resolver out of session state breaks a lot of the tests. I can revert most of these changes so they don't use the resolver and keep the same behavior for now, but I wanted to at least open the discussion of whether resolver-aware schema merging makes sense here. Also, is there another way you can (or are supposed to) get a resolver, or is this just a quirk of the test setup?
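   For context, a minimal sketch of what resolver-aware merging could look like, assuming the two-argument `merge(schema, resolver)` overload proposed in this PR (it is not existing `StructType` API). One possible answer to "another way to get a resolver" is `SQLConf.get`, the thread-local active conf, whose `resolver` is case-sensitive or case-insensitive depending on `spark.sql.caseSensitive`. The object name `ResolverMergeSketch` is just for illustration.

```scala
// Sketch only: merge(schema, resolver) with two arguments is the overload
// proposed in this PR, not an existing StructType method.
import org.apache.spark.sql.catalyst.analysis.Resolver
import org.apache.spark.sql.internal.SQLConf
import org.apache.spark.sql.types.{IntegerType, StructField, StructType}

object ResolverMergeSketch {
  def main(args: Array[String]): Unit = {
    // Two schemas whose column names differ only by case.
    val left  = StructType(Seq(StructField("ID", IntegerType)))
    val right = StructType(Seq(StructField("id", IntegerType)))

    // Possible alternative to threading sparkSession.sessionState.conf through:
    // SQLConf.get is the thread-local active conf, and its resolver is picked
    // from spark.sql.caseSensitive (case-sensitive vs case-insensitive).
    val resolver: Resolver = SQLConf.get.resolver

    // With a case-insensitive resolver, "ID" and "id" would be treated as the
    // same column when merging; with a case-sensitive resolver they would
    // remain two distinct fields.
    val merged = left.merge(right, resolver)
    println(merged.treeString)
  }
}
```

   Whether `SQLConf.get` is appropriate here (versus passing the session's conf explicitly, as the diff does) is exactly the kind of question raised above; the sketch only illustrates that the resolver itself is derivable from the conf rather than from the full session state.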