cloud-fan commented on code in PR #36027:
URL: https://github.com/apache/spark/pull/36027#discussion_r975330647
##########
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala:
##########
@@ -1026,7 +1026,14 @@ private[hive] object HiveClientImpl extends Logging {
} else {
CharVarcharUtils.getRawTypeString(c.metadata).getOrElse(c.dataType.catalogString)
}
- new FieldSchema(c.name, typeString, c.getComment().orNull)
+ val name = if (lowerCase) {
+ // scalastyle:off caselocale
+ c.name.toLowerCase
+ // scalastyle:on caselocale
+ } else {
+ c.name
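(Editor's aside, not part of the original comment.) The `scalastyle caselocale` markers around `toLowerCase` in the diff exist because JVM string case mapping is locale-sensitive, which can silently change identifiers under some default locales. A minimal sketch of the pitfall, using only `java.util.Locale`:

```scala
import java.util.Locale

object CaseLocaleSketch extends App {
  // Under a Turkish locale, uppercase 'I' lowercases to the dotless
  // 'ı' (U+0131), not ASCII 'i' -- a classic source of identifier bugs.
  val turkish = "TITLE".toLowerCase(new Locale("tr"))
  val root    = "TITLE".toLowerCase(Locale.ROOT)
  println(turkish) // "tıtle" (dotless i)
  println(root)    // "title"
}
```

This is why scalastyle flags bare `toLowerCase`: code that must be locale-stable should pass `Locale.ROOT` explicitly, and the `scalastyle:off caselocale` escape hatch should only be used where locale-default behavior is genuinely intended.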
Review Comment:
Looking at the related code again, I feel this `hive table -> spark table
-> hive table` roundtrip is unnecessary when we need to pass the raw table
to `HiveClient` APIs. Shall we expose a `getHiveTable` function in
`HiveClient` and let `HiveExternalCatalog` call it? That would avoid the
`spark table -> hive table` conversion, and we should be able to fix this
bug.
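(Editor's note.) For illustration only, the API shape being proposed might look like the sketch below. The type names `RawHiveTable` and `CatalogTable` here are simplified stand-ins, not the actual Spark or Hive Metastore classes, and `getRawHiveTable` is a hypothetical signature suggested by the comment, not existing code:

```scala
// Stand-in types; the real ones live in the Spark and Hive codebases.
case class RawHiveTable(db: String, name: String)  // Hive metastore-side table
case class CatalogTable(identifier: String)        // Spark-side table

trait HiveClient {
  // Existing path: callers get a Spark CatalogTable, so passing it back to
  // Hive requires a lossy spark table -> hive table re-conversion.
  def getTable(db: String, table: String): CatalogTable

  // Proposed addition: return the raw Hive table directly, so that
  // HiveExternalCatalog can hand it back to other HiveClient APIs without
  // any round-trip conversion.
  def getRawHiveTable(db: String, table: String): RawHiveTable
}
```

The point of the design is that `HiveExternalCatalog` would hold on to the raw Hive representation for the duration of an operation, rather than reconstructing it from the Spark-side `CatalogTable`, which is where the case-sensitivity bug in this PR crept in.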
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]