[ https://issues.apache.org/jira/browse/SPARK-17948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marcelo Vanzin resolved SPARK-17948.
------------------------------------
    Resolution: Duplicate

>  WARN CodeGenerator: Error calculating stats of compiled class
> --------------------------------------------------------------
>
>                 Key: SPARK-17948
>                 URL: https://issues.apache.org/jira/browse/SPARK-17948
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 2.0.1
>            Reporter: Harish
>
> I am getting the errors below on a 2.0.2 snapshot build, using PySpark.
> 16/10/14 22:33:25 WARN CodeGenerator: Error calculating stats of compiled class.
> java.lang.IndexOutOfBoundsException: Index: 2659, Size: 1
>       at java.util.ArrayList.rangeCheck(ArrayList.java:653)
>       at java.util.ArrayList.get(ArrayList.java:429)
>       at org.codehaus.janino.util.ClassFile.getConstantPoolInfo(ClassFile.java:457)
>       at org.codehaus.janino.util.ClassFile.getConstantUtf8(ClassFile.java:469)
>       at org.codehaus.janino.util.ClassFile.loadAttribute(ClassFile.java:1387)
>       at org.codehaus.janino.util.ClassFile.loadAttributes(ClassFile.java:555)
>       at org.codehaus.janino.util.ClassFile.loadFields(ClassFile.java:518)
>       at org.codehaus.janino.util.ClassFile.<init>(ClassFile.java:185)
>       at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anonfun$recordCompilationStats$1.apply(CodeGenerator.scala:919)
>       at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anonfun$recordCompilationStats$1.apply(CodeGenerator.scala:916)
>       at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>       at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>       at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>       at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>       at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.recordCompilationStats(CodeGenerator.scala:916)
>       at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.org$apache$spark$sql$catalyst$expressions$codegen$CodeGenerator$$doCompile(CodeGenerator.scala:888)
>       at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anon$1.load(CodeGenerator.scala:950)
>       at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anon$1.load(CodeGenerator.scala:947)
>       at org.spark_project.guava.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599)
>       at org.spark_project.guava.cache.LocalCache$Segment.loadSync(LocalCache.java:2379)
>       at org.spark_project.guava.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2342)
>       at org.spark_project.guava.cache.LocalCache$Segment.get(LocalCache.java:2257)
>       at org.spark_project.guava.cache.LocalCache.get(LocalCache.java:4000)
>       at org.spark_project.guava.cache.LocalCache.getOrLoad(LocalCache.java:4004)
>       at org.spark_project.guava.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4874)
>       at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.compile(CodeGenerator.scala:841)
>       at org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$.create(GenerateMutableProjection.scala:140)
>       at org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$.generate(GenerateMutableProjection.scala:44)
>       at org.apache.spark.sql.execution.SparkPlan.newMutableProjection(SparkPlan.scala:369)
>       at org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$doExecute$1$$anonfun$4$$anonfun$5.apply(HashAggregateExec.scala:110)
>       at org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$doExecute$1$$anonfun$4$$anonfun$5.apply(HashAggregateExec.scala:109)
>       at org.apache.spark.sql.execution.aggregate.AggregationIterator.generateProcessRow(AggregationIterator.scala:179)
>       at org.apache.spark.sql.execution.aggregate.AggregationIterator.<init>(AggregationIterator.scala:198)
>       at org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.<init>(TungstenAggregationIterator.scala:92)
>       at org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$doExecute$1$$anonfun$4.apply(HashAggregateExec.scala:103)
>       at org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$doExecute$1$$anonfun$4.apply(HashAggregateExec.scala:94)
>       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:785)
>       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:785)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
>       at org.apache.spark.scheduler.Task.run(Task.scala:86)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> 16/10/14 22:35:40 WARN CodeGenerator: Error calculating stats of compiled class.
> java.io.EOFException
>       at java.io.DataInputStream.readFully(DataInputStream.java:197)
>       at java.io.DataInputStream.readFully(DataInputStream.java:169)
>       at org.codehaus.janino.util.ClassFile.loadAttribute(ClassFile.java:1383)
>       at org.codehaus.janino.util.ClassFile.loadAttributes(ClassFile.java:555)
>       at org.codehaus.janino.util.ClassFile.loadFields(ClassFile.java:518)
>       at org.codehaus.janino.util.ClassFile.<init>(ClassFile.java:185)
>       at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anonfun$recordCompilationStats$1.apply(CodeGenerator.scala:919)
>       at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anonfun$recordCompilationStats$1.apply(CodeGenerator.scala:916)
>       at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>       at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>       at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>       at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>       at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.recordCompilationStats(CodeGenerator.scala:916)
>       at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.org$apache$spark$sql$catalyst$expressions$codegen$CodeGenerator$$doCompile(CodeGenerator.scala:888)
>       at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anon$1.load(CodeGenerator.scala:950)
>       at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anon$1.load(CodeGenerator.scala:947)
>       at org.spark_project.guava.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599)
>       at org.spark_project.guava.cache.LocalCache$Segment.loadSync(LocalCache.java:2379)
>       at org.spark_project.guava.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2342)
>       at org.spark_project.guava.cache.LocalCache$Segment.get(LocalCache.java:2257)
>       at org.spark_project.guava.cache.LocalCache.get(LocalCache.java:4000)
>       at org.spark_project.guava.cache.LocalCache.getOrLoad(LocalCache.java:4004)
>       at org.spark_project.guava.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4874)
>       at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.compile(CodeGenerator.scala:841)
>       at org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$.create(GenerateMutableProjection.scala:140)
>       at org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$.generate(GenerateMutableProjection.scala:44)
>       at org.apache.spark.sql.execution.SparkPlan.newMutableProjection(SparkPlan.scala:369)
>       at org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$doExecute$1$$anonfun$4$$anonfun$5.apply(HashAggregateExec.scala:110)
>       at org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$doExecute$1$$anonfun$4$$anonfun$5.apply(HashAggregateExec.scala:109)
>       at org.apache.spark.sql.execution.aggregate.AggregationIterator.generateProcessRow(AggregationIterator.scala:179)
>       at org.apache.spark.sql.execution.aggregate.AggregationIterator.<init>(AggregationIterator.scala:198)
>       at org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.<init>(TungstenAggregationIterator.scala:92)
>       at org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$doExecute$1$$anonfun$4.apply(HashAggregateExec.scala:103)
>       at org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$doExecute$1$$anonfun$4.apply(HashAggregateExec.scala:94)
>       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:785)
>       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:785)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>       at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>       at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:105)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>       at org.apache.spark.rdd.RDD$$anonfun$8.apply(RDD.scala:332)
>       at org.apache.spark.rdd.RDD$$anonfun$8.apply(RDD.scala:330)
>       at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:935)
>       at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:926)
>       at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:866)
>       at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:926)
>       at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:670)
>       at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:330)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:281)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>       at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
>       at org.apache.spark.scheduler.Task.run(Task.scala:86)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> 16/10/14 22:35:41 WARN Executor: 1 block locks were not released by TID = 31852:
> [rdd_1053_47]
> 16/10/14 22:35:41 WARN Executor: 1 block locks were not released by TID = 32252:
> [rdd_1053_447]
> 16/10/14 22:35:41 WARN Executor: 1 block locks were not released by TID = 32603:
> [rdd_1053_803]
> 16/10/14 22:35:42 WARN Executor: 1 block locks were not released by TID = 32690:
> [rdd_1053_890]
> 16/10/14 22:35:43 WARN Executor: 1 block locks were not released by TID = 33084:
> [rdd_1053_1289]
> 16/10/14 22:35:43 WARN Executor: 1 block locks were not released by TID = 33678:
> [rdd_1053_1890]
> 16/10/14 22:35:43 WARN Executor: 1 block locks were not released by TID = 33283:
> [rdd_1053_1489]
> 16/10/14 22:37:58 WARN Executor: 1 block locks were not released by TID = 46624:
> [rdd_854_10]
> 16/10/14 22:37:58 WARN Executor: 1 block locks were not released by TID = 46654:
> [rdd_854_40]
> 16/10/14 22:37:58 WARN Executor: 1 block locks were not released by TID = 46644:
> [rdd_854_30]
> 16/10/14 22:37:58 WARN Executor: 1 block locks were not released by TID = 46634:
> [rdd_854_20]
> 16/10/14 22:37:58 WARN Executor: 1 block locks were not released by TID = 46664:
> [rdd_854_51]
> 16/10/14 22:37:58 WARN Executor: 1 block locks were not released by TID = 46803:
> [rdd_854_193]
> 16/10/14 22:37:58 WARN Executor: 1 block locks were not released by TID = 46801:
> [rdd_854_191]
> 16/10/14 22:37:58 WARN Executor: 1 block locks were not released by TID = 46808:
> [rdd_854_198]
> 16/10/14 22:37:58 WARN Executor: 1 block locks were not released by TID = 46805:
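
The reporter did not attach a reproducer. As a hypothetical sketch, any PySpark aggregation of the following shape exercises the HashAggregateExec -> GenerateMutableProjection -> CodeGenerator.compile path that appears in both traces; the table and column names here are illustrative assumptions, not taken from the report:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("spark-17948-sketch").getOrCreate()

    # Illustrative data; the original report does not say what was aggregated.
    df = spark.range(0, 1000000).withColumn("bucket", F.col("id") % 100)

    # groupBy + agg compiles a generated MutableProjection, the entry point
    # in the traces above. The WARN fires afterwards, when
    # recordCompilationStats re-parses the compiled class bytes with
    # Janino's ClassFile reader to record size statistics.
    df.groupBy("bucket").agg(F.count("id"), F.avg("id")).show()

Note that the failure occurs in the stats bookkeeping, not in compilation itself: the message is logged at WARN level and, as the later task activity in the log shows, execution continues.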



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
