See <https://builds.apache.org/job/Phoenix-master/715/changes>

Changes:

[samarth.jain] PHOENIX-1912 DropIndexDuringUpsertIT is hanging causing build to fail

------------------------------------------
[...truncated 1803 lines...]
[INFO] --- scala-maven-plugin:3.2.0:testCompile (scala-test-compile) @ phoenix-spark ---
[WARNING]  Expected all dependencies to require Scala version: 2.10.4
[WARNING]  org.apache.phoenix:phoenix-spark:4.4.0-SNAPSHOT requires scala version: 2.10.4
[WARNING]  org.scalatest:scalatest_2.10:2.2.4 requires scala version: 2.10.4
[WARNING]  org.scalamock:scalamock-scalatest-support_2.10:3.1.4 requires scala version: 2.10.4
[WARNING]  org.scalamock:scalamock-core_2.10:3.1.4 requires scala version: 2.10.4
[WARNING]  com.twitter:chill_2.10:0.5.0 requires scala version: 2.10.4
[WARNING]  org.spark-project.akka:akka-remote_2.10:2.3.4-spark requires scala version: 2.10.4
[WARNING]  org.spark-project.akka:akka-actor_2.10:2.3.4-spark requires scala version: 2.10.4
[WARNING]  org.spark-project.akka:akka-slf4j_2.10:2.3.4-spark requires scala version: 2.10.4
[WARNING]  org.apache.spark:spark-core_2.10:1.3.0 requires scala version: 2.10.4
[WARNING]  org.json4s:json4s-jackson_2.10:3.2.10 requires scala version: 2.10.0
[WARNING] Multiple versions of scala libraries detected!
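(The version drift flagged above comes from org.json4s:json4s-jackson_2.10:3.2.10, which declares Scala 2.10.0 while everything else in the module targets 2.10.4. It is a warning only and unrelated to the test failures further down. As a hypothetical pom.xml sketch, not taken from the actual Phoenix build, one way to keep the resolved scala-library at a single version is to pin it in dependencyManagement:

    <!-- Hypothetical fragment: force every transitive scala-library to 2.10.4 -->
    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>org.scala-lang</groupId>
          <artifactId>scala-library</artifactId>
          <version>2.10.4</version>
        </dependency>
      </dependencies>
    </dependencyManagement>
)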
[INFO] 
<https://builds.apache.org/job/Phoenix-master/ws/phoenix-spark/src/it/scala>:-1: info: compiling
[INFO] Compiling 1 source files to <https://builds.apache.org/job/Phoenix-master/ws/phoenix-spark/target/test-classes> at 1429846482472
[WARNING] warning: Class org.joda.convert.FromString not found - continuing with a stub.
[WARNING] warning: Class org.joda.convert.ToString not found - continuing with a stub.
[WARNING] warning: Class org.joda.convert.ToString not found - continuing with a stub.
[WARNING] warning: Class org.joda.convert.FromString not found - continuing with a stub.
[WARNING] warning: Class org.joda.convert.ToString not found - continuing with a stub.
[WARNING] warning: Class org.joda.convert.FromString not found - continuing with a stub.
[WARNING] warning: Class org.joda.convert.FromString not found - continuing with a stub.
[WARNING] warning: Class org.apache.hadoop.mapred.MiniMRCluster not found - continuing with a stub.
[WARNING] 8 warnings found
[INFO] prepare-compile in 0 s
[INFO] compile in 10 s
[INFO] 
[INFO] --- maven-compiler-plugin:3.0:testCompile (default-testCompile) @ phoenix-spark ---
[INFO] Changes detected - recompiling the module!
[INFO] 
[INFO] --- maven-surefire-plugin:2.18:test (default-test) @ phoenix-spark ---
[INFO] 
[INFO] --- scalatest-maven-plugin:1.0:test (test) @ phoenix-spark ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- maven-source-plugin:2.2.1:jar-no-fork (attach-sources) @ phoenix-spark ---
[INFO] Building jar: <https://builds.apache.org/job/Phoenix-master/715/artifact/phoenix-spark/target/phoenix-spark-4.4.0-SNAPSHOT-sources.jar>
[INFO] 
[INFO] --- maven-jar-plugin:2.4:test-jar (default) @ phoenix-spark ---
[INFO] Building jar: <https://builds.apache.org/job/Phoenix-master/715/artifact/phoenix-spark/target/phoenix-spark-4.4.0-SNAPSHOT-tests.jar>
[INFO] 
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ phoenix-spark ---
[INFO] Building jar: <https://builds.apache.org/job/Phoenix-master/715/artifact/phoenix-spark/target/phoenix-spark-4.4.0-SNAPSHOT.jar>
[INFO] 
[INFO] --- maven-site-plugin:3.2:attach-descriptor (attach-descriptor) @ phoenix-spark ---
[INFO] 
[INFO] --- scalatest-maven-plugin:1.0:test (integration-test) @ phoenix-spark ---
WARNING: -c has been deprecated and will be reused for a different (but still very cool) purpose in ScalaTest 2.0. Please change all uses of -c to -P.
Discovery starting.
Discovery completed in 302 milliseconds.
Run starting. Expected test count is: 17
PhoenixSparkIT:
Formatting using clusterid: testClusterID
0    [sparkDriver-akka.actor.default-dispatcher-3] INFO  Remoting  - Starting remoting
177  [sparkDriver-akka.actor.default-dispatcher-3] INFO  Remoting  - Remoting started; listening on addresses :[akka.tcp://sparkdri...@pietas.apache.org:36266]
- Can create valid SQL
- Can convert Phoenix schema
- Can create schema RDD and execute query *** FAILED ***
  org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName="TABLE1"
  at org.apache.phoenix.schema.PMetaDataImpl.getTable(PMetaDataImpl.java:241)
  at org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:323)
  at org.apache.phoenix.util.PhoenixRuntime.generateColumnInfo(PhoenixRuntime.java:350)
  at org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getSelectColumnMetadataList(PhoenixConfigurationUtil.java:263)
  at org.apache.phoenix.spark.PhoenixRDD.toDataFrame(PhoenixRDD.scala:109)
  at org.apache.phoenix.spark.SparkSqlContextFunctions.phoenixTableAsDataFrame(SparkSqlContextFunctions.scala:37)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$3.apply$mcV$sp(PhoenixSparkIT.scala:134)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$3.apply(PhoenixSparkIT.scala:131)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$3.apply(PhoenixSparkIT.scala:131)
  at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
  ...
- Can create schema RDD and execute query on case sensitive table (no config) *** FAILED ***
  org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName="table3"
  at org.apache.phoenix.schema.PMetaDataImpl.getTable(PMetaDataImpl.java:241)
  at org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:323)
  at org.apache.phoenix.util.PhoenixRuntime.generateColumnInfo(PhoenixRuntime.java:350)
  at org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getSelectColumnMetadataList(PhoenixConfigurationUtil.java:263)
  at org.apache.phoenix.spark.PhoenixRDD.toDataFrame(PhoenixRDD.scala:109)
  at org.apache.phoenix.spark.SparkSqlContextFunctions.phoenixTableAsDataFrame(SparkSqlContextFunctions.scala:37)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$4.apply$mcV$sp(PhoenixSparkIT.scala:157)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$4.apply(PhoenixSparkIT.scala:153)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$4.apply(PhoenixSparkIT.scala:153)
  at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
  ...
- Can create schema RDD and execute constrained query *** FAILED ***
  org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName="TABLE1"
  at org.apache.phoenix.schema.PMetaDataImpl.getTable(PMetaDataImpl.java:241)
  at org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:323)
  at org.apache.phoenix.util.PhoenixRuntime.generateColumnInfo(PhoenixRuntime.java:350)
  at org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getSelectColumnMetadataList(PhoenixConfigurationUtil.java:263)
  at org.apache.phoenix.spark.PhoenixRDD.toDataFrame(PhoenixRDD.scala:109)
  at org.apache.phoenix.spark.SparkSqlContextFunctions.phoenixTableAsDataFrame(SparkSqlContextFunctions.scala:37)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$5.apply$mcV$sp(PhoenixSparkIT.scala:172)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$5.apply(PhoenixSparkIT.scala:169)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$5.apply(PhoenixSparkIT.scala:169)
  at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
  ...
- Using a predicate referring to a non-existent column should fail *** FAILED ***
  Expected exception java.lang.RuntimeException to be thrown, but org.apache.phoenix.schema.TableNotFoundException was thrown. (PhoenixSparkIT.scala:194)
- Can create schema RDD with predicate that will never match *** FAILED ***
  org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName="table3"
  at org.apache.phoenix.schema.PMetaDataImpl.getTable(PMetaDataImpl.java:241)
  at org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:323)
  at org.apache.phoenix.util.PhoenixRuntime.generateColumnInfo(PhoenixRuntime.java:350)
  at org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getSelectColumnMetadataList(PhoenixConfigurationUtil.java:263)
  at org.apache.phoenix.spark.PhoenixRDD.toDataFrame(PhoenixRDD.scala:109)
  at org.apache.phoenix.spark.SparkSqlContextFunctions.phoenixTableAsDataFrame(SparkSqlContextFunctions.scala:37)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$7.apply$mcV$sp(PhoenixSparkIT.scala:213)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$7.apply(PhoenixSparkIT.scala:210)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$7.apply(PhoenixSparkIT.scala:210)
  at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
  ...
- Can create schema RDD with complex predicate *** FAILED ***
  org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName="DATE_PREDICATE_TEST_TABLE"
  at org.apache.phoenix.schema.PMetaDataImpl.getTable(PMetaDataImpl.java:241)
  at org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:323)
  at org.apache.phoenix.util.PhoenixRuntime.generateColumnInfo(PhoenixRuntime.java:350)
  at org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getSelectColumnMetadataList(PhoenixConfigurationUtil.java:263)
  at org.apache.phoenix.spark.PhoenixRDD.toDataFrame(PhoenixRDD.scala:109)
  at org.apache.phoenix.spark.SparkSqlContextFunctions.phoenixTableAsDataFrame(SparkSqlContextFunctions.scala:37)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$8.apply$mcV$sp(PhoenixSparkIT.scala:229)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$8.apply(PhoenixSparkIT.scala:226)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$8.apply(PhoenixSparkIT.scala:226)
  at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
  ...
- Can query an array table *** FAILED ***
  org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName="ARRAY_TEST_TABLE"
  at org.apache.phoenix.schema.PMetaDataImpl.getTable(PMetaDataImpl.java:241)
  at org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:323)
  at org.apache.phoenix.util.PhoenixRuntime.generateColumnInfo(PhoenixRuntime.java:350)
  at org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getSelectColumnMetadataList(PhoenixConfigurationUtil.java:263)
  at org.apache.phoenix.spark.PhoenixRDD.toDataFrame(PhoenixRDD.scala:109)
  at org.apache.phoenix.spark.SparkSqlContextFunctions.phoenixTableAsDataFrame(SparkSqlContextFunctions.scala:37)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$9.apply$mcV$sp(PhoenixSparkIT.scala:250)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$9.apply(PhoenixSparkIT.scala:247)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$9.apply(PhoenixSparkIT.scala:247)
  at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
  ...
- Can read a table as an RDD
- Can save to phoenix table
- Can save Java and Joda dates to Phoenix (no config)
- Can infer schema without defining columns *** FAILED ***
  org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName="TABLE2"
  at org.apache.phoenix.schema.PMetaDataImpl.getTable(PMetaDataImpl.java:241)
  at org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:323)
  at org.apache.phoenix.util.PhoenixRuntime.generateColumnInfo(PhoenixRuntime.java:350)
  at org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getSelectColumnMetadataList(PhoenixConfigurationUtil.java:263)
  at org.apache.phoenix.spark.PhoenixRDD.toDataFrame(PhoenixRDD.scala:109)
  at org.apache.phoenix.spark.SparkSqlContextFunctions.phoenixTableAsDataFrame(SparkSqlContextFunctions.scala:37)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$13.apply$mcV$sp(PhoenixSparkIT.scala:335)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$13.apply(PhoenixSparkIT.scala:333)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$13.apply(PhoenixSparkIT.scala:333)
  at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
  ...
- Spark SQL can use Phoenix as a data source with no schema specified *** FAILED ***
  org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName="TABLE1"
  at org.apache.phoenix.schema.PMetaDataImpl.getTable(PMetaDataImpl.java:241)
  at org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:323)
  at org.apache.phoenix.util.PhoenixRuntime.generateColumnInfo(PhoenixRuntime.java:350)
  at org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getSelectColumnMetadataList(PhoenixConfigurationUtil.java:263)
  at org.apache.phoenix.spark.PhoenixRDD.toDataFrame(PhoenixRDD.scala:109)
  at org.apache.phoenix.spark.PhoenixRelation.schema(PhoenixRelation.scala:40)
  at org.apache.spark.sql.sources.LogicalRelation.<init>(LogicalRelation.scala:30)
  at org.apache.spark.sql.SQLContext.load(SQLContext.scala:680)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$14.apply$mcV$sp(PhoenixSparkIT.scala:343)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$14.apply(PhoenixSparkIT.scala:341)
  ...
- Spark SQL can use Phoenix as a data source with PrunedFilteredScan *** FAILED ***
  org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName="TABLE1"
  at org.apache.phoenix.schema.PMetaDataImpl.getTable(PMetaDataImpl.java:241)
  at org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:323)
  at org.apache.phoenix.util.PhoenixRuntime.generateColumnInfo(PhoenixRuntime.java:350)
  at org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getSelectColumnMetadataList(PhoenixConfigurationUtil.java:263)
  at org.apache.phoenix.spark.PhoenixRDD.toDataFrame(PhoenixRDD.scala:109)
  at org.apache.phoenix.spark.PhoenixRelation.schema(PhoenixRelation.scala:40)
  at org.apache.spark.sql.sources.LogicalRelation.<init>(LogicalRelation.scala:30)
  at org.apache.spark.sql.SQLContext.load(SQLContext.scala:680)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$15.apply$mcV$sp(PhoenixSparkIT.scala:352)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$15.apply(PhoenixSparkIT.scala:350)
  ...
- Can persist a dataframe using 'DataFrame.saveToPhoenix' *** FAILED ***
  org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName="TABLE1"
  at org.apache.phoenix.schema.PMetaDataImpl.getTable(PMetaDataImpl.java:241)
  at org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:323)
  at org.apache.phoenix.util.PhoenixRuntime.generateColumnInfo(PhoenixRuntime.java:350)
  at org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getSelectColumnMetadataList(PhoenixConfigurationUtil.java:263)
  at org.apache.phoenix.spark.PhoenixRDD.toDataFrame(PhoenixRDD.scala:109)
  at org.apache.phoenix.spark.PhoenixRelation.schema(PhoenixRelation.scala:40)
  at org.apache.spark.sql.sources.LogicalRelation.<init>(LogicalRelation.scala:30)
  at org.apache.spark.sql.SQLContext.load(SQLContext.scala:680)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$16.apply$mcV$sp(PhoenixSparkIT.scala:371)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$16.apply(PhoenixSparkIT.scala:368)
  ...
- Can persist a dataframe using 'DataFrame.save() *** FAILED ***
  org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName="TABLE1"
  at org.apache.phoenix.schema.PMetaDataImpl.getTable(PMetaDataImpl.java:241)
  at org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:323)
  at org.apache.phoenix.util.PhoenixRuntime.generateColumnInfo(PhoenixRuntime.java:350)
  at org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getSelectColumnMetadataList(PhoenixConfigurationUtil.java:263)
  at org.apache.phoenix.spark.PhoenixRDD.toDataFrame(PhoenixRDD.scala:109)
  at org.apache.phoenix.spark.PhoenixRelation.schema(PhoenixRelation.scala:40)
  at org.apache.spark.sql.sources.LogicalRelation.<init>(LogicalRelation.scala:30)
  at org.apache.spark.sql.SQLContext.load(SQLContext.scala:680)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$17.apply$mcV$sp(PhoenixSparkIT.scala:399)
  at org.apache.phoenix.spark.PhoenixSparkIT$$anonfun$17.apply(PhoenixSparkIT.scala:391)
  ...
Run completed in 37 seconds, 518 milliseconds.
Total number of tests run: 17
Suites: completed 2, aborted 0
Tests: succeeded 5, failed 12, canceled 0, ignored 0, pending 0
*** 12 TESTS FAILED ***
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Phoenix .................................... SUCCESS [1.759s]
[INFO] Phoenix Core ...................................... SUCCESS [22:48.077s]
[INFO] Phoenix - Flume ................................... SUCCESS [1:00.422s]
[INFO] Phoenix - Pig ..................................... SUCCESS [2:19.875s]
[INFO] Phoenix Query Server Client ....................... SUCCESS [0.708s]
[INFO] Phoenix Query Server .............................. SUCCESS [1:45.551s]
[INFO] Phoenix - Spark ................................... FAILURE [1:09.741s]
[INFO] Phoenix Assembly .................................. SKIPPED
[INFO] Phoenix - Pherf ................................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 29:06.642s
[INFO] Finished at: Fri Apr 24 03:35:40 UTC 2015
[INFO] Final Memory: 82M/1240M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.scalatest:scalatest-maven-plugin:1.0:test (integration-test) on project phoenix-spark: There are test failures -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :phoenix-spark
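(As a worked instance of that resume command, with the goals filled in only as an assumption about what this job runs, restarting from the failed module with full stack traces might look like:

    mvn clean install -e -rf :phoenix-spark
)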
Build step 'Invoke top-level Maven targets' marked build as failure
Archiving artifacts
Sending artifact delta relative to Phoenix | Master #701
Archived 1089 artifacts
Archive block size is 32768
Received 18878 blocks and 486983577 bytes
Compression is 56.0%
Took 2 min 13 sec
Updating PHOENIX-1912
Recording test results
