Mahesh Raju Somalaraju created CARBONDATA-4254:
--------------------------------------------------

             Summary: Fix random CI failures in alter add and SI
                 Key: CARBONDATA-4254
                 URL: https://issues.apache.org/jira/browse/CARBONDATA-4254
             Project: CarbonData
          Issue Type: Bug
            Reporter: Mahesh Raju Somalaraju


Fix random CI failures in the ALTER TABLE ADD COLUMNS and secondary index (SI) test suites.

The following test cases are currently ignored because they fail randomly (a sketch of how a case is ignored follows the list):
 * [org.apache.carbondata.spark.testsuite.mergeindex.CarbonIndexFileMergeTestCaseWithSI.Verify command of index merge|http://121.244.95.60:12602/job/ApacheCarbon_PR_Builder_3.1/org.apache.carbondata$carbondata-secondary-index/141/testReport/junit/org.apache.carbondata.spark.testsuite.mergeindex/CarbonIndexFileMergeTestCaseWithSI/Verify_command_of_index_merge/]
 * [org.apache.carbondata.spark.testsuite.alterTable.TestAlterTableAddColumns.Test alter add for structs enabling local dictionary|http://121.244.95.60:12602/job/ApacheCarbon_PR_Builder_3.1/org.apache.carbondata$carbondata-spark_3.1/141/testReport/junit/org.apache.carbondata.spark.testsuite.alterTable/TestAlterTableAddColumns/Test_alter_add_for_structs_enabling_local_dictionary/]
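
For reference, parking a flaky case in these suites is a one-word change in ScalaTest, the framework CarbonData's Scala test suites are built on. A minimal sketch with a hypothetical suite name and the test bodies elided:

{code:scala}
import org.scalatest.FunSuite

// Hypothetical suite for illustration; the real suites extend Spark's
// QueryTest, but the ignore mechanism is the same.
class FlakyCasesSketch extends FunSuite {

  // `ignore` keeps the case compiled but skips it at run time, so it is
  // reported as "ignored" instead of randomly "failed".
  ignore("Verify command of index merge") {
    // original test body unchanged
  }

  test("a stable case still runs") {
    assert(1 + 1 == 2)
  }
}
{code}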

h3. Error Message

Job aborted due to stage failure: Task 1 in stage 1941.0 failed 1 times, most 
recent failure: Lost task 1.0 in stage 1941.0 (TID 71657) (ubuntu executor 
driver): java.lang.RuntimeException: Failed to merge index files in path: 
/opt/jenkins/workspace/ApacheCarbon_PR_Builder_3.1/integration/spark/target/warehouse/nonindexmerge/Fact/Part0/Segment_1.
 Table status update with mergeIndex file has failed
h3. Stacktrace

org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 1941.0 failed 1 times, most recent failure: Lost task 1.0 in stage 1941.0 (TID 71657) (ubuntu executor driver): java.lang.RuntimeException: Failed to merge index files in path: /opt/jenkins/workspace/ApacheCarbon_PR_Builder_3.1/integration/spark/target/warehouse/nonindexmerge/Fact/Part0/Segment_1. Table status update with mergeIndex file has failed
  at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.mergeCarbonIndexFilesOfSegment(CarbonIndexFileMergeWriter.java:120)
  at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.mergeCarbonIndexFilesOfSegment(CarbonIndexFileMergeWriter.java:374)
  at org.apache.spark.rdd.CarbonMergeFilesRDD$$anon$2.<init>(CarbonMergeFilesRDD.scala:325)
  at org.apache.spark.rdd.CarbonMergeFilesRDD.internalCompute(CarbonMergeFilesRDD.scala:287)
  at org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:84)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
  at org.apache.spark.scheduler.Task.run(Task.scala:131)
  at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
  at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Table status update with mergeIndex file has failed
  at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.writeMergeIndexFileBasedOnSegmentFile(CarbonIndexFileMergeWriter.java:315)
  at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.mergeCarbonIndexFilesOfSegment(CarbonIndexFileMergeWriter.java:112)
  ... 14 more
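
For context, the failing path above is CarbonData's segment index merge. A minimal sketch of the kind of sequence that reaches CarbonIndexFileMergeWriter, assuming the sql(...) helper available in the test suites; the table name and load path are illustrative, not the exact test body:

{code:scala}
// Illustrative sequence only -- not the exact CarbonIndexFileMergeTestCaseWithSI body.
sql("DROP TABLE IF EXISTS nonindexmerge")
sql("CREATE TABLE nonindexmerge (name STRING, age INT) STORED AS carbondata")
sql("LOAD DATA INPATH '...' INTO TABLE nonindexmerge")  // data path elided

// SEGMENT_INDEX compaction merges each segment's .carbonindex files into a
// single merge index file and then rewrites the table status file -- the
// "Table status update with mergeIndex file has failed" step in the trace.
sql("ALTER TABLE nonindexmerge COMPACT 'SEGMENT_INDEX'")
{code}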

h3. Error Message

Job aborted due to stage failure: Task 0 in stage 7648.0 failed 1 times, most recent failure: Lost task 0.0 in stage 7648.0 (TID 107156) (ubuntu executor driver): java.lang.ArrayIndexOutOfBoundsException: 2
h3. Stacktrace

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 7648.0 failed 1 times, most recent failure: Lost task 0.0 in stage 7648.0 (TID 107156) (ubuntu executor driver): java.lang.ArrayIndexOutOfBoundsException: 2
  at org.apache.spark.sql.catalyst.expressions.GenericInternalRow.genericGet(rows.scala:201)
  at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow.getAs(rows.scala:35)
  at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow.isNullAt(rows.scala:36)
  at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow.isNullAt$(rows.scala:36)
  at org.apache.spark.sql.catalyst.expressions.GenericInternalRow.isNullAt(rows.scala:195)
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.writeFields_0_1$(Unknown Source)
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
  at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
  at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
  at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:346)
  at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
  at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
  at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
  at org.apache.spark.scheduler.Task.run(Task.scala:131)
  at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
  at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
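
The ArrayIndexOutOfBoundsException: 2 above is the generic symptom of probing a catalyst row at an ordinal beyond its arity, e.g. a row built against the pre-ALTER two-field schema being read with the widened post-ALTER schema. A minimal standalone sketch of that failure mode (an illustration of the symptom, not a confirmed root cause):

{code:scala}
import org.apache.spark.sql.catalyst.expressions.GenericInternalRow

// A row materialized with two fields: valid ordinals are 0 and 1.
val row = new GenericInternalRow(Array[Any]("value", 1))

row.isNullAt(1) // fine: ordinal 1 exists
// Probing ordinal 2 fails exactly as in the trace above:
// isNullAt -> getAs -> genericGet -> values(2)
row.isNullAt(2) // java.lang.ArrayIndexOutOfBoundsException: 2
{code}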

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
