GitHub user tejasapatil commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15300#discussion_r81649882
  
    --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala ---
    @@ -198,6 +195,30 @@ case class InsertIntoHiveTable(
           }
         }
     
    +    table.catalogTable.bucketSpec match {
    +      case Some(bucketSpec) =>
    +        // We cannot populate bucketing information for Hive tables as Spark SQL has a
    +        // different implementation of hash function from Hive.
    +        // Hive native hashing will be supported after SPARK-17495. Until then, writes to
    +        // bucketed tables are allowed only if the user does not care about maintaining the
    +        // table's bucketing, i.e. both "hive.enforce.bucketing" and "hive.enforce.sorting"
    +        // are set to false.
    +
    +        val enforceBucketingConfig = "hive.enforce.bucketing"
    +        val enforceSortingConfig = "hive.enforce.sorting"
    +
    +        val message = s"Output Hive table ${table.catalogTable.identifier} is bucketed but " +
    +          "Spark currently does NOT populate bucketed output which is compatible with Hive."
    +
    +        if (hadoopConf.get(enforceBucketingConfig, "false").toBoolean ||
    +          hadoopConf.get(enforceSortingConfig, "false").toBoolean) {
    --- End diff ---
    
    @viirya : Even right now on trunk, if you try to insert data into a bucketed table it will just work without producing bucketed output. I don't want to break that for existing users by making these configs default to true. The eventual goal would be to not have these configs at all and have Spark *always* produce data adhering to the table's bucketing spec (without breaking existing pipelines).
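    
    For readers who don't have the rest of the diff handy, here is a minimal sketch of how the truncated `if` above could plausibly conclude. It only illustrates the behaviour described in this thread, not necessarily the PR's exact code; `AnalysisException` and `logWarning` are assumed to be available inside `InsertIntoHiveTable`:
    
        // Hypothetical continuation of the quoted snippet, for illustration only.
        if (hadoopConf.get(enforceBucketingConfig, "false").toBoolean ||
            hadoopConf.get(enforceSortingConfig, "false").toBoolean) {
          // The user explicitly asked Hive to enforce bucketing/sorting, which Spark
          // cannot honour yet, so fail the insert instead of silently writing bad buckets.
          throw new AnalysisException(message)
        } else {
          // Neither config is set: the insert proceeds, but the output does not
          // follow the table's bucketing spec.
          logWarning(message)
        }
    
    With both configs left at false, an insert into a bucketed table keeps working exactly as it does on trunk today, just without bucketed output; setting either one to true turns the same write into an error.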

