[
https://issues.apache.org/jira/browse/SPARK-5809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen updated SPARK-5809:
-----------------------------
Priority: Minor (was: Major)
Assignee: Joseph K. Bradley
Labels: easyfix (was: easyfix newbie)
I don't think that solves it, in the sense that debug logging is already off by
default. You don't have debug logging turned on do you?
If not, I suppose the problem is that the large debug string is evaluated even
though it's never logged. But I thought that, given the way the logDebug method
is declared, the message is evaluated lazily, only after the log level is checked.
I am not sure why that string would be so large in any event, unless you have a
huge number of features?
CC Joseph K. Bradley, as he might be able to quickly evaluate this.
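For reference, the lazy-evaluation behavior in question comes from declaring the
log message as a by-name parameter: the expression is only evaluated if the level
check passes. A minimal, self-contained sketch of that pattern (the object and
method names here are hypothetical, not Spark's actual Logging trait):

```scala
// Sketch of the by-name logging pattern: `msg: => String` defers evaluation
// of the message expression until (and unless) the debug check passes.
object LazyLogDemo {
  var debugEnabled = false  // stands in for the log4j level check
  var evaluations = 0       // counts how often the message is actually built

  def logDebug(msg: => String): Unit = {
    if (debugEnabled) println(msg)
  }

  def expensiveDebugString(): String = {
    evaluations += 1
    (1 to 10000).mkString(",")  // stands in for the large feature-subset dump
  }

  def main(args: Array[String]): Unit = {
    logDebug(expensiveDebugString())  // debug off: the string is never built
    println(s"evaluations = $evaluations")
  }
}
```

With debug logging off, `evaluations` stays 0, so if the OOM really happens
inside logDebug, debug logging must be enabled for that logger.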
> OutOfMemoryError in logDebug in RandomForest.scala
> --------------------------------------------------
>
> Key: SPARK-5809
> URL: https://issues.apache.org/jira/browse/SPARK-5809
> Project: Spark
> Issue Type: Bug
> Components: MLlib
> Affects Versions: 1.2.0
> Reporter: Devesh Parekh
> Assignee: Joseph K. Bradley
> Priority: Minor
> Labels: easyfix
>
> When training a GBM on sparse vectors produced by HashingTF, I get the
> following OutOfMemoryError, where RandomForest is building a debug string to
> log.
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:3326)
> at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:137)
> at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:121)
> at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:421)
> at java.lang.StringBuilder.append(StringBuilder.java:136)
> at scala.collection.mutable.StringBuilder.append(StringBuilder.scala:197)
> at scala.collection.TraversableOnce$$anonfun$addString$1.apply(TraversableOnce.scala:327)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at scala.collection.TraversableOnce$class.addString(TraversableOnce.scala:320)
> at scala.collection.AbstractTraversable.addString(Traversable.scala:105)
> at scala.collection.TraversableOnce$class.mkString(TraversableOnce.scala:286)
> at scala.collection.AbstractTraversable.mkString(Traversable.scala:105)
> at scala.collection.TraversableOnce$class.mkString(TraversableOnce.scala:288)
> at scala.collection.AbstractTraversable.mkString(Traversable.scala:105)
> at org.apache.spark.mllib.tree.RandomForest$$anonfun$run$9.apply(RandomForest.scala:152)
> at org.apache.spark.mllib.tree.RandomForest$$anonfun$run$9.apply(RandomForest.scala:152)
> at org.apache.spark.Logging$class.logDebug(Logging.scala:63)
> at org.apache.spark.mllib.tree.RandomForest.logDebug(RandomForest.scala:67)
> at org.apache.spark.mllib.tree.RandomForest.run(RandomForest.scala:150)
> at org.apache.spark.mllib.tree.DecisionTree.run(DecisionTree.scala:64)
> at org.apache.spark.mllib.tree.GradientBoostedTrees$.org$apache$spark$mllib$tree$GradientBoostedTrees$$boost(GradientBoostedTrees.scala:150)
> at org.apache.spark.mllib.tree.GradientBoostedTrees.run(GradientBoostedTrees.scala:63)
> at org.apache.spark.mllib.tree.GradientBoostedTrees$.train(GradientBoostedTrees.scala:96)
> A workaround until this is fixed is to modify log4j.properties in the conf
> directory to suppress debug logging from RandomForest. For example:
> log4j.logger.org.apache.spark.mllib.tree.RandomForest=WARN
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]