Github user markgrover commented on the pull request:

    https://github.com/apache/spark/pull/10248#issuecomment-165333670
  
    In Hive, and as I understand it in Spark SQL as well, the default
    partitioning mode is strict, which requires at least one of the partitions
    being inserted into to be specified statically. To do a fully dynamic
    partitioned insert, this mode has to be changed first. In Spark SQL, that
    means issuing a query like
    sqlContext.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
    before running the actual INSERT statement.
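    
    For example, a fully dynamic partitioned insert would look roughly like
    this (Scala, against a HiveContext; the table and column names here are
    made up for illustration):
    
        // Both settings are needed for a fully dynamic insert: the first
        // enables dynamic partitioning, the second relaxes strict mode.
        sqlContext.sql("SET hive.exec.dynamic.partition=true")
        sqlContext.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
    
        // No static value is given for the `country` partition column, so
        // this would fail in strict mode but succeeds once nonstrict is set.
        sqlContext.sql(
          """INSERT INTO TABLE sales PARTITION (country)
            |SELECT id, amount, country FROM staging_sales""".stripMargin)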
    
    Does that answer your question?
    
    On Wed, Dec 16, 2015 at 4:16 PM, Yin Huai <[email protected]> wrote:
    
    > What do you mean by mentioning SET
    > hive.exec.dynamic.partition.mode=nonstrict?
    >
    > —
    > Reply to this email directly or view it on GitHub
    > <https://github.com/apache/spark/pull/10248#issuecomment-165296868>.
    >


