[ https://issues.apache.org/jira/browse/SPARK-19775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun updated SPARK-19775:
----------------------------------
    Description: 
This issue removes [a test case|https://github.com/apache/spark/blame/master/sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertIntoHiveTableSuite.scala#L287-L298], which was introduced by [SPARK-14459|https://github.com/apache/spark/commit/652bbb1bf62722b08a062c7a2bf72019f85e179e] and superseded by [SPARK-16033|https://github.com/apache/spark/blame/master/sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertIntoHiveTableSuite.scala#L365-L371]. In short, `partitionBy` cannot be used together with `insertInto` at all, which makes this test obsolete; the sketch after the quoted test below illustrates this.

{code}
  test("Reject partitioning that does not match table") {
    withSQLConf(("hive.exec.dynamic.partition.mode", "nonstrict")) {
      sql("CREATE TABLE partitioned (id bigint, data string) PARTITIONED BY (part string)")
      val data = (1 to 10).map(i => (i, s"data-$i", if ((i % 2) == 0) "even" else "odd"))
          .toDF("id", "data", "part")

      intercept[AnalysisException] {
        // cannot partition by 2 fields when there is only one in the table definition
        data.write.partitionBy("part", "data").insertInto("partitioned")
      }
    }
  }
{code}
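
For context, here is a minimal standalone sketch of the behavior that makes the test above redundant. This is a hypothetical snippet rather than code from the suite, and it assumes a Hive-enabled `SparkSession` is in scope as `spark`: since SPARK-16033, `insertInto` throws an `AnalysisException` for any write that specifies `partitionBy`, even when the columns match the table definition.

{code}
// Hypothetical sketch, assuming a Hive-enabled SparkSession named `spark`.
// Since SPARK-16033, insertInto() rejects any use of partitionBy(), whether
// or not the columns match the table's partition spec.
import org.apache.spark.sql.AnalysisException
import spark.implicits._

spark.sql("CREATE TABLE partitioned (id bigint, data string) PARTITIONED BY (part string)")
val df = Seq((1L, "data-1", "odd")).toDF("id", "data", "part")

try {
  // Throws even though `part` matches the table's single partition column.
  df.write.partitionBy("part").insertInto("partitioned")
} catch {
  case e: AnalysisException =>
    println(s"rejected as expected: ${e.getMessage}")
}
{code}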


  was:
This issue removes [a test case|https://github.com/apache/spark/blame/master/sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertIntoHiveTableSuite.scala#L287-L298] which was introduced by [SPARK-14459|https://github.com/apache/spark/commit/10b671447bc04af250cbd8a7ea86f2769147a78a] and was superseded by [SPARK-16033|https://github.com/apache/spark/blame/master/sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertIntoHiveTableSuite.scala#L365-L371]. Basically, we cannot use `partitionBy` and `insertInto` together.

{code}
  test("Reject partitioning that does not match table") {
    withSQLConf(("hive.exec.dynamic.partition.mode", "nonstrict")) {
      sql("CREATE TABLE partitioned (id bigint, data string) PARTITIONED BY (part string)")
      val data = (1 to 10).map(i => (i, s"data-$i", if ((i % 2) == 0) "even" else "odd"))
          .toDF("id", "data", "part")

      intercept[AnalysisException] {
        // cannot partition by 2 fields when there is only one in the table definition
        data.write.partitionBy("part", "data").insertInto("partitioned")
      }
    }
  }
{code}



> Remove an obsolete `partitionBy().insertInto()` test case
> ---------------------------------------------------------
>
>                 Key: SPARK-19775
>                 URL: https://issues.apache.org/jira/browse/SPARK-19775
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL, Tests
>    Affects Versions: 2.1.0
>            Reporter: Dongjoon Hyun
>            Priority: Trivial
>


