[ https://issues.apache.org/jira/browse/HIVE-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215149#comment-14215149 ]
Hive QA commented on HIVE-8867:
-------------------------------

{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12681965/HIVE-8867.1-spark.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 7234 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.ql.exec.spark.TestHiveKVResultCache.testResultList
{noformat}

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/382/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/382/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-382/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12681965 - PreCommit-HIVE-SPARK-Build

> Investigate test failure on mapjoin_filter_on_outerjoin.q [Spark Branch]
> ------------------------------------------------------------------------
>
>                 Key: HIVE-8867
>                 URL: https://issues.apache.org/jira/browse/HIVE-8867
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>    Affects Versions: spark-branch
>            Reporter: Chao
>            Assignee: Chao
>         Attachments: HIVE-8867.1-spark.patch
>
>
> If we run the test, it fails with the following exception:
> {noformat}
> java.lang.RuntimeException: Map operator initialization failed
>     at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.init(SparkMapRecordHandler.java:136)
>     at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunction.call(HiveMapFunction.java:54)
>     at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunction.call(HiveMapFunction.java:29)
>     at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:167)
>     at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:167)
>     at org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:599)
>     at org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:599)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>     at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
>     at org.apache.spark.scheduler.Task.run(Task.scala:56)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:186)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
>     at java.util.ArrayList.<init>(ArrayList.java:164)
>     at org.apache.hadoop.hive.ql.exec.HashTableSinkOperator.initializeOp(HashTableSinkOperator.java:163)
>     at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:385)
>     at org.apache.hadoop.hive.ql.exec.SparkHashTableSinkOperator.initializeOp(SparkHashTableSinkOperator.java:59)
>     at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:385)
>     at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:469)
>     at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:425)
>     at org.apache.hadoop.hive.ql.exec.FilterOperator.initializeOp(FilterOperator.java:66)
>     at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:385)
>     at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:469)
>     at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:425)
>     at org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:193)
>     at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:385)
>     at org.apache.hadoop.hive.ql.exec.MapOperator.initializeOp(MapOperator.java:427)
>     at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:385)
>     at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.init(SparkMapRecordHandler.java:112)
>     ... 18 more
> {noformat}
> We need to find out why.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
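For context on the {{Caused by}} frame above: a NullPointerException thrown from the {{java.util.ArrayList}} copy constructor is what you get when that constructor is handed a null collection. The following is only a minimal, self-contained sketch of that failure mode, not the actual HashTableSinkOperator code; {{maybeNull}} is a hypothetical stand-in for whatever collection initializeOp() reads at line 163, which this comment does not confirm.

{code:java}
import java.util.ArrayList;
import java.util.List;

public class ArrayListNpeSketch {
  public static void main(String[] args) {
    // Hypothetical stand-in for a collection that is assumed (not confirmed)
    // to be null on the Spark code path exercised by mapjoin_filter_on_outerjoin.q.
    List<String> maybeNull = null;

    // ArrayList's copy constructor calls toArray() on its argument, so a null
    // argument throws java.lang.NullPointerException from ArrayList.<init>,
    // the same frame as the "Caused by" in the stack trace above.
    List<String> copy = new ArrayList<>(maybeNull);
    System.out.println(copy);
  }
}
{code}

If that is indeed what line 163 does, the investigation would focus on why the source collection is null only under Spark, rather than on the constructor call itself.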