[
https://issues.apache.org/jira/browse/SPARK-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15276493#comment-15276493
]
Ryan Blue commented on SPARK-14459:
-----------------------------------
Thank you [~lian cheng]!
> SQL partitioning must match existing tables, but is not checked.
> ----------------------------------------------------------------
>
> Key: SPARK-14459
> URL: https://issues.apache.org/jira/browse/SPARK-14459
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.0.0
> Reporter: Ryan Blue
> Assignee: Ryan Blue
> Fix For: 2.0.0
>
>
> Writing into partitioned Hive tables has unexpected results because the
> table's partitioning is not detected and applied during the analysis phase.
> For example, if I have two tables, {{source}} and {{partitioned}}, with the
> same column types:
> {code}
> CREATE TABLE source (id bigint, data string, part string);
> CREATE TABLE partitioned (id bigint, data string) PARTITIONED BY (part string);
> // copy from source to partitioned (Scala)
> sqlContext.table("source").write.insertInto("partitioned")
> {code}
> Copying from {{source}} to {{partitioned}} succeeds, but results in 0 rows.
> This works if I explicitly declare the partitioning with
> {{...write.partitionBy("part").insertInto(...)}}. This work-around isn't
> obvious and is error-prone: the columns passed to {{partitionBy}} must match
> the table's partitioning exactly, yet nothing checks that they do.
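> Spelled out, the work-around looks like this (a sketch only, reusing the
> {{source}} and {{partitioned}} tables defined above with a 1.6/2.0-era
> {{sqlContext}}):
> {code}
> // Declaring the partition columns explicitly makes the copy produce rows,
> // but the columns given to partitionBy must match the table's
> // PARTITIONED BY clause, and no check enforces that they do.
> sqlContext.table("source")
>   .write
>   .partitionBy("part")
>   .insertInto("partitioned")
> {code}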
> I think that when relations are resolved, the table's partitioning should be
> checked, and applied automatically if the writer didn't set it.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)