Hi Jungtaek Lim,

SparkInterpreter uses the Scala REPL internally. Please see the related
issue https://issues.scala-lang.org/browse/SI-4331; there's a workaround
in its description.
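
For context, the REPL wraps each interpreted line in an object that keeps
a strong reference to the line's result, so large results stay reachable
long after you're done with them. Here's a minimal sketch of one commonly
cited mitigation, declaring big results as vars and nulling them out; it
may differ in detail from the exact workaround in the ticket, and the
HDFS path is a made-up placeholder:

    // in a %spark paragraph
    var big = sc.textFile("hdfs:///tmp/example").collect()  // pinned by the REPL line wrapper
    // ... use big ...
    big = null   // drop the reference so the wrapper no longer pins the array
    System.gc()  // just a hint; the JVM decides when to actually collect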

But I believe there's no easy way to free up the memory completely, short
of destroying and recreating the Scala REPL.
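
For illustration only (not Zeppelin's actual code path), a minimal sketch
of what destroying and recreating the REPL means at the
scala.tools.nsc.interpreter.IMain level. A fresh instance gets a fresh
symbol table, which is how you reclaim the interned names that show up in
the stack trace below (Names.enterChars):

    import scala.tools.nsc.Settings
    import scala.tools.nsc.interpreter.IMain

    val settings = new Settings
    settings.usejavacp.value = true          // run against the JVM classpath

    var intp = new IMain(settings)
    intp.interpret("""val xs = Array.fill(1000000)("x")""")

    // close() releases the interpreter's class loader and compiler state;
    // a new IMain starts with an empty name table and no retained results.
    intp.close()
    intp = new IMain(settings)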

Thanks,
moon

On Thu, Dec 24, 2015 at 9:27 AM Jungtaek Lim <kabh...@gmail.com> wrote:

> Forgot to add the error log and stack trace:
>
> 15/12/16 11:17:02 INFO SchedulerFactory: Job remoteInterpretJob_1450232220684 started by scheduler org.apache.zeppelin.spark.SparkInterpreter2005736637
> 15/12/16 11:17:08 ERROR Job: Job failed
> java.lang.OutOfMemoryError: Java heap space
>         at scala.reflect.internal.Names$class.enterChars(Names.scala:69)
>         at scala.reflect.internal.Names$class.newTermName(Names.scala:104)
>         at scala.reflect.internal.SymbolTable.newTermName(SymbolTable.scala:13)
>         at scala.reflect.internal.Names$class.newTermName(Names.scala:113)
>         at scala.reflect.internal.SymbolTable.newTermName(SymbolTable.scala:13)
>         at scala.reflect.internal.Names$class.newTypeName(Names.scala:116)
>         at scala.reflect.internal.SymbolTable.newTypeName(SymbolTable.scala:13)
>         at scala.reflect.internal.Names$TypeName.newName(Names.scala:531)
>         at scala.reflect.internal.Names$TypeName.newName(Names.scala:513)
>         at scala.reflect.internal.Names$Name.append(Names.scala:424)
>         at scala.reflect.internal.Symbols$Symbol.fullNameInternal(Symbols.scala:1044)
>         at scala.reflect.internal.Symbols$Symbol.fullNameAsName(Symbols.scala:1047)
>         at scala.reflect.internal.Symbols$Symbol.fullNameInternal(Symbols.scala:1044)
>         at scala.reflect.internal.Symbols$Symbol.fullNameAsName(Symbols.scala:1047)
>         at scala.reflect.internal.Symbols$Symbol.fullNameInternal(Symbols.scala:1044)
>         at scala.reflect.internal.Symbols$Symbol.fullNameAsName(Symbols.scala:1047)
>         at scala.reflect.internal.Symbols$Symbol.fullNameInternal(Symbols.scala:1044)
>         at scala.reflect.internal.Symbols$Symbol.fullNameAsName(Symbols.scala:1047)
>         at scala.reflect.internal.Symbols$Symbol.fullNameInternal(Symbols.scala:1044)
>         at scala.reflect.internal.Symbols$Symbol.fullNameAsName(Symbols.scala:1047)
>         at scala.reflect.internal.Symbols$Symbol.fullNameInternal(Symbols.scala:1044)
>         at scala.reflect.internal.Symbols$Symbol.fullNameAsName(Symbols.scala:1047)
>         at scala.reflect.internal.Symbols$Symbol.fullNameInternal(Symbols.scala:1044)
>         at scala.reflect.internal.Symbols$Symbol.fullNameAsName(Symbols.scala:1047)
>         at scala.reflect.internal.Symbols$Symbol.fullNameInternal(Symbols.scala:1044)
>         at scala.reflect.internal.Symbols$Symbol.fullNameAsName(Symbols.scala:1047)
>         at scala.reflect.internal.Symbols$Symbol.fullNameInternal(Symbols.scala:1044)
>         at scala.reflect.internal.Symbols$Symbol.fullNameAsName(Symbols.scala:1047)
>         at scala.reflect.internal.Symbols$Symbol.fullNameInternal(Symbols.scala:1044)
>         at scala.reflect.internal.Symbols$Symbol.fullNameAsName(Symbols.scala:1047)
>         at scala.reflect.internal.Symbols$Symbol.fullNameInternal(Symbols.scala:1044)
>         at scala.reflect.internal.Symbols$Symbol.fullNameAsName(Symbols.scala:1047)
> 15/12/16 11:17:08 INFO SchedulerFactory: Job remoteInterpretJob_1450232220684 finished by scheduler org.apache.zeppelin.spark.SparkInterpreter2005736637
>
> The same log lines are printed whenever a new job runs after the OOME.
>
>
> On Thu, Dec 24, 2015 at 9:25 AM, Jungtaek Lim <kabh...@gmail.com> wrote:
>
>> Hi users,
>>
>> I've hit an OOME when using the Spark interpreter and would like to resolve it.
>>
>> - Spark version: 1.4.1 + applying SPARK-11818 <http://issues.apache.org/jira/browse/SPARK-11818>
>> - Spark cluster: Mesos 0.22.1
>> - Zeppelin: commit 1ba6e2a <https://github.com/apache/incubator-zeppelin/commit/1ba6e2a5969e475bc926943885c120f793266147> + applying ZEPPELIN-507 <https://issues.apache.org/jira/browse/ZEPPELIN-507> & ZEPPELIN-509 <https://issues.apache.org/jira/browse/ZEPPELIN-509>
>> - loaded one fat driver jar via %dep
>>
>> I've run a paragraph that dumps an HBase table to HDFS several times, and
>> taken memory histograms via "jmap -histo:live <pid>".
>> Looking at the histograms, I can see that the interpreter's memory usage
>> increases every time I run the paragraph.
>> There could be a memory leak in the Spark app itself, but nothing is clear,
>> so I'd like to find other users who see the same behavior.
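>>
>> As a cross-check, here's a small sketch I use for watching heap growth
>> from a paragraph itself; it's plain JMX via java.lang.management, nothing
>> Zeppelin-specific, so it should complement the jmap histograms:
>>
>>     import java.lang.management.ManagementFactory
>>
>>     // snapshot of used heap in MB from the standard memory MXBean
>>     def heapUsedMb(): Long =
>>       ManagementFactory.getMemoryMXBean.getHeapMemoryUsage.getUsed / (1024 * 1024)
>>
>>     println(s"heap used: ${heapUsedMb()} MB")
>>     // re-run after each HBase-dump run to compare snapshots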
>>
>> Is anyone else seeing the same behavior, and if so, could you share how you resolved it?
>>
>> Thanks,
>> Jungtaek Lim (HeartSaVioR)
>>
>
>
>
> --
> Name : Jungtaek Lim
> Blog : http://medium.com/@heartsavior
> Twitter : http://twitter.com/heartsavior
> LinkedIn : http://www.linkedin.com/in/heartsavior
>
