Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/15300#discussion_r81500407
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -198,6 +195,30 @@ case class InsertIntoHiveTable(
}
}
+    table.catalogTable.bucketSpec match {
+      case Some(bucketSpec) =>
+        // We can not populate bucketing information for Hive tables as Spark SQL has a different
+        // implementation of hash function from Hive.
+        // Hive native hashing will be supported after SPARK-17495. Until then, writes to bucketed
+        // tables are allowed only if user does not care about maintaining table's bucketing
+        // ie. both "hive.enforce.bucketing" and "hive.enforce.sorting" are set to false
+
+        val enforceBucketingConfig = "hive.enforce.bucketing"
+        val enforceSortingConfig = "hive.enforce.sorting"
+
+        val message = s"Output Hive table ${table.catalogTable.identifier} is bucketed but Spark" +
+          "currently does NOT populate bucketed output which is compatible with Hive."
+
+        if (hadoopConf.get(enforceBucketingConfig, "false").toBoolean ||
+            hadoopConf.get(enforceSortingConfig, "false").toBoolean) {
--- End diff --
Are the default values (`false`) for these two configs safe? If the user isn't aware of them, they could insert incompatible data into a bucketed Hive table.