GitHub user KaiXinXiaoLei opened a pull request:

    https://github.com/apache/spark/pull/5150

    A Spark unit test fails depending on test execution order.

    If the tests in
"sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala"
run before CachedTableSuite.scala, the test("Drop cached table") fails:
SQLQuerySuite.scala creates a table named "test" and never drops it, so by the
time "Drop cached table" runs, the table "test" already exists.
    
    The error is:
    01:18:35.738 ERROR hive.ql.exec.DDLTask: org.apache.hadoop.hive.ql.metadata.HiveException: AlreadyExistsException(message:Table test already exists)
    at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:616)
    at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4189)
    at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:281)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1503)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1270)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1088)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:911)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:901)
    
    
    The test that creates table "test", in
"sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala", is:

      test("SPARK-4825 save join to table") {
        val testData = sparkContext.parallelize(1 to 10).map(i => TestData(i, i.toString)).toDF()
        sql("CREATE TABLE test1 (key INT, value STRING)")
        testData.insertInto("test1")
        sql("CREATE TABLE test2 (key INT, value STRING)")
        testData.insertInto("test2")
        testData.insertInto("test2")
        sql("CREATE TABLE test AS SELECT COUNT(a.value) FROM test1 a JOIN test2 b ON a.key = b.key")
        checkAnswer(
          table("test"),
          sql("SELECT COUNT(a.value) FROM test1 a JOIN test2 b ON a.key = b.key").collect().toSeq)
      }
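The interference can be sketched outside Spark. In the plain-Scala sketch below, the Hive metastore is simulated with a hypothetical in-memory map; Metastore, sqlQuerySuite, and cachedTableSuite are illustrative names, not Spark or Hive APIs:

```scala
import scala.collection.mutable

// Hypothetical in-memory stand-in for the Hive metastore: table name -> DDL.
object Metastore {
  private val tables = mutable.Map.empty[String, String]

  def createTable(name: String, ddl: String): Unit = {
    // Mirrors Hive's AlreadyExistsException when the name is already taken.
    if (tables.contains(name))
      throw new IllegalStateException(s"AlreadyExistsException: Table $name already exists")
    tables(name) = ddl
  }

  def dropTable(name: String): Unit = tables.remove(name)
}

// Simulates the SQLQuerySuite test quoted above: creates "test" and never drops it.
def sqlQuerySuite(): Unit =
  Metastore.createTable("test", "CREATE TABLE test AS SELECT ...")

// Simulates CachedTableSuite's test("Drop cached table"), which also needs to
// create (and then drop) a table named "test".
def cachedTableSuite(): Unit = {
  Metastore.createTable("test", "CREATE TABLE test (key INT, value STRING)")
  Metastore.dropTable("test")
}
```

Running sqlQuerySuite() before cachedTableSuite() makes the second createTable throw; in the other order both pass. Dropping "test" at the end of the first suite removes the order dependence, which is the kind of cleanup this pull request is about.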

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/KaiXinXiaoLei/spark testFailed

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/5150.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #5150
    
----
commit 7534b021c9fdacdcd139d5d56c2a24c92f6b43eb
Author: KaiXinXiaoLei <[email protected]>
Date:   2015-03-24T02:35:03Z

    A Spark unit test fails depending on test order,
    
    because a test in SQLQuerySuite creates a table named "test" and never drops it.

----

