cloud-fan commented on code in PR #36027:
URL: https://github.com/apache/spark/pull/36027#discussion_r966724740
##########
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala:
##########
@@ -1095,7 +1095,11 @@ private[hive] object HiveClientImpl extends Logging {
   table.bucketSpec match {
     case Some(bucketSpec) if !HiveExternalCatalog.isDatasourceTable(table) =>
       hiveTable.setNumBuckets(bucketSpec.numBuckets)
-      hiveTable.setBucketCols(bucketSpec.bucketColumnNames.toList.asJava)
Review Comment:
AFAIK Spark stores the table schema and bucket spec as table properties and
restores them when reading the table back from the Hive metastore. So Spark is
always case-preserving, for both the schema and the bucket spec. Is there
something wrong?
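
Minimal sketch of the case-preservation idea described above: Hive lower-cases identifiers, so Spark keeps the original-cased names in table properties and restores them on read. The property key and helper names below are simplified placeholders for illustration, not Spark's actual keys or APIs.

```scala
// Hedged sketch, not Spark's real implementation: shows how storing the
// original-cased bucket column names in a table-properties map preserves
// case even though Hive itself only keeps lower-cased identifiers.
object CasePreservationSketch {
  // Simulate writing a table: Hive side loses casing, the properties
  // map (placeholder key "bucketCols") keeps the original names.
  def store(bucketCols: Seq[String]): (Seq[String], Map[String, String]) = {
    val hiveSide = bucketCols.map(_.toLowerCase)
    val props = Map("bucketCols" -> bucketCols.mkString(","))
    (hiveSide, props)
  }

  // Simulate reading the table back: restore from the properties map,
  // ignoring the lower-cased Hive-side names.
  def restore(props: Map[String, String]): Seq[String] =
    props("bucketCols").split(",").toSeq

  def main(args: Array[String]): Unit = {
    val original = Seq("UserId", "EventDate")
    val (hiveSide, props) = store(original)
    assert(hiveSide == Seq("userid", "eventdate")) // Hive loses the casing
    assert(restore(props) == original)             // Spark-side restore keeps it
    println("case preserved via table properties")
  }
}
```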
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]