[ https://issues.apache.org/jira/browse/SYSTEMML-911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485745#comment-15485745 ]

Glenn Weidner edited comment on SYSTEMML-911 at 9/13/16 12:24 AM:
------------------------------------------------------------------

Attached Scala script that was run from spark-shell, launched with:
./bin/spark-shell --executor-memory 4G --driver-memory 4G --jars ../systemml/lib/systemml.jar
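
For reference, a minimal sketch (not the attached LinearRegrCG.0.10.scala, whose contents may differ) of the kind of driver that exercises the failing code path, assuming the old org.apache.sysml.api.MLContext API and a synthetic dense DataFrame; the data shape, variable names, and toy DML body are illustrative assumptions only:

import scala.util.Random
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.types.{DoubleType, StructField, StructType}
import org.apache.sysml.api.MLContext

// Synthetic 100000 x 1000 dense DataFrame of random doubles (the shape is an
// assumption; the real script may build its data differently).
val sqlContext = new SQLContext(sc)
val numCols = 1000
val schema = StructType((1 to numCols).map(i => StructField("C" + i, DoubleType, nullable = false)))
val rows = sc.parallelize(1 to 100000, 16)
  .map(_ => Row.fromSeq(Seq.fill(numCols)(Random.nextDouble())))
val X = sqlContext.createDataFrame(rows, schema)

// Old MLContext API: register the DataFrame as input "X" and run a DML
// script. Executing the script triggers the SPARK rblk instruction seen in
// the trace, i.e. the DataFrame-to-binary-block conversion where the GC
// overhead limit is hit. The empty read/write paths follow the old
// convention of binding registered variables by name (also an assumption
// about the exact stubs used here).
val ml = new MLContext(sc)
ml.registerInput("X", X)
ml.registerOutput("s")
val out = ml.executeScript("""
X = read("");
s = as.matrix(sum(X));
write(s, "");
""")

With a large enough DataFrame relative to the executor heap, this path fails in RDDConverterUtilsExt$DataFrameToBinaryBlockFunction as in the stack trace below; raising --executor-memory/--driver-memory, as in the launch command above, is the obvious first mitigation.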


was (Author: gweidner):
Attached Scala script that was run from spark-shell.

> GC overhead limit exceeded running LinearRegressionCG from MLContext
> --------------------------------------------------------------------
>
>                 Key: SYSTEMML-911
>                 URL: https://issues.apache.org/jira/browse/SYSTEMML-911
>             Project: SystemML
>          Issue Type: Bug
>          Components: APIs
>            Reporter: Glenn Weidner
>         Attachments: LinearRegrCG.0.10.scala
>
>
> Running the attached Scala script from spark-shell using the original MLContext
> against Spark 1.6 (or 2.0) encountered an out-of-memory error (GC overhead
> limit exceeded):
> uncaught exception during compilation: java.lang.AssertionError
> org.apache.sysml.runtime.DMLRuntimeException: org.apache.sysml.runtime.DMLRuntimeException: ERROR: Runtime error in program block generated from statement block between lines 3 and 9 -- Error evaluating instruction:
> SPARK°rblk°X·MATRIX·DOUBLE°_mVar2·MATRIX·DOUBLE°1000°1000°true
>       at org.apache.sysml.runtime.controlprogram.Program.execute(Program.java:152)
>       at org.apache.sysml.api.MLContext.executeUsingSimplifiedCompilationChain(MLContext.java:1398)
>       at org.apache.sysml.api.MLContext.compileAndExecuteScript(MLContext.java:1257)
>       at org.apache.sysml.api.MLContext.executeScript(MLContext.java:1146)
>       at org.apache.sysml.api.MLContext.executeScript(MLContext.java:1136)
>       at org.apache.sysml.api.MLContext.executeScript(MLContext.java:1131)
>       at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
>       at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:37)
>       at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:39)
>       at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:41)
>       at $iwC$$iwC$$iwC$$iwC.<init>(<console>:43)
>       at $iwC$$iwC$$iwC.<init>(<console>:45)
>       at $iwC$$iwC.<init>(<console>:47)
>       at $iwC.<init>(<console>:49)
>       at <init>(<console>:51)
>       at .<init>(<console>:55)
>       at .<clinit>(<console>)
>       at .<init>(<console>:7)
>       at .<clinit>(<console>)
>       at $print(<console>)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:498)
>       at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
>       at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
>       at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
>       at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
>       at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
>       at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
>       at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
>       at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
>       at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
>       at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
>       at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
>       at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
>       at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>       at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>       at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>       at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
>       at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
>       at org.apache.spark.repl.Main$.main(Main.scala:31)
>       at org.apache.spark.repl.Main.main(Main.scala)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:498)
>       at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>       at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
>       at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
>       at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
>       at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: org.apache.sysml.runtime.DMLRuntimeException: ERROR: Runtime error in program block generated from statement block between lines 3 and 9 -- Error evaluating instruction:
> SPARK°rblk°X·MATRIX·DOUBLE°_mVar2·MATRIX·DOUBLE°1000°1000°true
>       at org.apache.sysml.runtime.controlprogram.ProgramBlock.executeSingleInstruction(ProgramBlock.java:333)
>       at org.apache.sysml.runtime.controlprogram.ProgramBlock.executeInstructions(ProgramBlock.java:222)
>       at org.apache.sysml.runtime.controlprogram.ProgramBlock.execute(ProgramBlock.java:166)
>       at org.apache.sysml.runtime.controlprogram.Program.execute(Program.java:145)
>       ... 51 more
> Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 6.0 failed 1 times, most recent failure: Lost task 0.0 in stage 6.0 (TID 10, localhost): java.lang.OutOfMemoryError: GC overhead limit exceeded
>       at org.apache.sysml.runtime.instructions.spark.utils.RDDConverterUtilsExt$RowToBinaryBlockFunctionHelper.flushBlocksToList(RDDConverterUtilsExt.java:800)
>       at org.apache.sysml.runtime.instructions.spark.utils.RDDConverterUtilsExt$RowToBinaryBlockFunctionHelper.convertToBinaryBlock(RDDConverterUtilsExt.java:736)
>       at org.apache.sysml.runtime.instructions.spark.utils.RDDConverterUtilsExt$DataFrameToBinaryBlockFunction.call(RDDConverterUtilsExt.java:463)
>       at org.apache.sysml.runtime.instructions.spark.utils.RDDConverterUtilsExt$DataFrameToBinaryBlockFunction.call(RDDConverterUtilsExt.java:448)
>       at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:192)
>       at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:192)
>       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
>       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>       at org.apache.spark.scheduler.Task.run(Task.scala:89)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> Driver stacktrace:
>       at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
>       at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>       at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>       at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
>       at scala.Option.foreach(Option.scala:236)
>       at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
>       at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
>       at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
>       at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
>       at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
>       at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:1952)
>       at org.apache.spark.rdd.RDD$$anonfun$aggregate$1.apply(RDD.scala:1114)
>       at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
>       at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
>       at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
>       at org.apache.spark.rdd.RDD.aggregate(RDD.scala:1107)
>       at org.apache.spark.api.java.JavaRDDLike$class.aggregate(JavaRDDLike.scala:411)
>       at org.apache.spark.api.java.AbstractJavaRDDLike.aggregate(JavaRDDLike.scala:46)
>       at org.apache.sysml.runtime.instructions.spark.utils.SparkUtils.computeNNZFromBlocks(SparkUtils.java:458)
>       at org.apache.sysml.runtime.controlprogram.context.SparkExecutionContext.writeRDDtoHDFS(SparkExecutionContext.java:802)
>       at org.apache.sysml.runtime.controlprogram.caching.MatrixObject.readBlobFromRDD(MatrixObject.java:612)
>       at org.apache.sysml.runtime.controlprogram.caching.MatrixObject.readBlobFromRDD(MatrixObject.java:62)
>       at org.apache.sysml.runtime.controlprogram.caching.CacheableData.acquireRead(CacheableData.java:440)
>       at org.apache.sysml.hops.recompile.Recompiler.executeInMemoryReblock(Recompiler.java:2067)
>       at org.apache.sysml.runtime.instructions.spark.ReblockSPInstruction.processInstruction(ReblockSPInstruction.java:100)
>       at org.apache.sysml.runtime.controlprogram.ProgramBlock.executeSingleInstruction(ProgramBlock.java:303)
>       ... 54 more
> Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
>       at org.apache.sysml.runtime.instructions.spark.utils.RDDConverterUtilsExt$RowToBinaryBlockFunctionHelper.flushBlocksToList(RDDConverterUtilsExt.java:800)
>       at org.apache.sysml.runtime.instructions.spark.utils.RDDConverterUtilsExt$RowToBinaryBlockFunctionHelper.convertToBinaryBlock(RDDConverterUtilsExt.java:736)
>       at org.apache.sysml.runtime.instructions.spark.utils.RDDConverterUtilsExt$DataFrameToBinaryBlockFunction.call(RDDConverterUtilsExt.java:463)
>       at org.apache.sysml.runtime.instructions.spark.utils.RDDConverterUtilsExt$DataFrameToBinaryBlockFunction.call(RDDConverterUtilsExt.java:448)
>       at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:192)
>       at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:192)
>       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
>       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>       at org.apache.spark.scheduler.Task.run(Task.scala:89)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)



