GitHub user tejasapatil commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17644#discussion_r116414803
  
    --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala ---
    @@ -307,6 +307,27 @@ case class InsertIntoHiveTable(
           }
         }
     
    +    table.bucketSpec match {
    +      case Some(bucketSpec) =>
    +        // Writes to bucketed Hive tables are allowed only if the user does not care about
    +        // maintaining the table's bucketing, i.e. both "hive.enforce.bucketing" and
    +        // "hive.enforce.sorting" are set to false
    +        val enforceBucketingConfig = "hive.enforce.bucketing"
    +        val enforceSortingConfig = "hive.enforce.sorting"
    +
    +        val message = s"Output Hive table ${table.identifier} is bucketed but Spark " +
    +          "currently does NOT populate bucketed output which is compatible with Hive."
    +
    +        if (hadoopConf.get(enforceBucketingConfig, "true").toBoolean ||
    +          hadoopConf.get(enforceSortingConfig, "true").toBoolean) {
    +          throw new AnalysisException(message)
    +        } else {
    +          logWarning(message + s" Inserting data anyway since both $enforceBucketingConfig and " +
    +            s"$enforceSortingConfig are set to false.")
    --- End diff --
    
    In Hive: it would lead to wrong results.
    
    In Spark (on master and also after this PR): the table scan operation does 
not take bucketing into account, so the table would be read as a regular table. 
So it won't be read "wrong"; it's just that we won't take advantage of the bucketing.
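    
    As a minimal usage sketch of the behavior this PR adds (the `spark` session 
and the table names `db.bucketed_tbl` / `db.src_tbl` are made up for 
illustration, not from the PR): the insert only goes through once both 
enforcement configs are explicitly disabled, and the output is then written 
without Hive-compatible bucketing.
    
        // Hypothetical example: with either config left at its default ("true"),
        // the insert below would fail with the AnalysisException from the diff.
        spark.conf.set("hive.enforce.bucketing", "false")
        spark.conf.set("hive.enforce.sorting", "false")
    
        // With both set to false, the insert runs and logs a warning that the
        // written output does not preserve the table's bucketing.
        spark.sql("INSERT OVERWRITE TABLE db.bucketed_tbl SELECT * FROM db.src_tbl")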

