[
https://issues.apache.org/jira/browse/SPARK-6864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14491306#comment-14491306
]
John Canny commented on SPARK-6864:
-----------------------------------
Spark was started on EC2 with this command (I just retried with Spark 1.3, the
original report was on 1.2):
./spark-ec2 -k "pils_rsa" -i /home/ec2-user/.ssh/jfc_rsa -s 8 --instance-type=r3.4xlarge --region=us-west-2 launch sparkcluster
I also tried up to 96 nodes of m3.2xlarge. I haven't had any trouble running most
MLlib routines in this configuration, and single-target logistic regression works. I
can run up to 3 targets using the same (multilabel classifier) code, but it runs out
of memory with more than that. The model size is around 30 MB with 100 targets, so
this behavior is puzzling.
I started spark-shell on the master node and entered the commands above. Everything
should be at its default value - I don't have any config scripts.
I looked at "spark.executor.memory" in the "Environment" tab at master:4040 and also
from the command line, by looking it up as a property of the SparkConf of the default
SparkContext. Both report 110 GB for the r3.4xlarge instance.
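Concretely, the command-line check amounts to something like this (a minimal sketch, assuming the default SparkContext sc created by spark-shell):

    // Read the configured executor heap straight from the SparkConf of the running context.
    sc.getConf.getOption("spark.executor.memory").getOrElse("(not set)")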
The log showed that it died in task 6. Here is more detail from the log:
15/04/12 02:38:03 INFO storage.BlockManagerInfo: Added taskresult_12 in memory on ip-10-46-128-197.us-west-2.compute.internal:47136 (size: 19.1 MB, free: 27.3 GB)
15/04/12 02:38:04 ERROR util.Utils: Uncaught exception in thread task-result-getter-3
java.lang.OutOfMemoryError: Java heap space
    at org.apache.spark.scheduler.DirectTaskResult$$anonfun$readExternal$1.apply$mcV$sp(TaskResult.scala:61)
    at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1137)
    at org.apache.spark.scheduler.DirectTaskResult.readExternal(TaskResult.scala:58)
    at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:1837)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1796)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:68)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:88)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:75)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:51)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:51)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1617)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$2.run(TaskResultGetter.scala:50)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Exception in thread "task-result-getter-3" java.lang.OutOfMemoryError: Java heap space
    at org.apache.spark.scheduler.DirectTaskResult$$anonfun$readExternal$1.apply$mcV$sp(TaskResult.scala:61)
    at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1137)
    at org.apache.spark.scheduler.DirectTaskResult.readExternal(TaskResult.scala:58)
    at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:1837)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1796)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:68)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:88)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:75)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:51)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:51)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1617)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$2.run(TaskResultGetter.scala:50)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
15/04/12 02:38:04 ERROR util.Utils: Uncaught exception in thread task-result-getter-0
java.lang.OutOfMemoryError: Java heap space
    at org.apache.spark.scheduler.DirectTaskResult$$anonfun$readExternal$1.apply$mcV$sp(TaskResult.scala:61)
    at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1137)
    at org.apache.spark.scheduler.DirectTaskResult.readExternal(TaskResult.scala:58)
    at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:1837)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1796)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:68)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:88)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:75)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:51)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:51)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1617)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$2.run(TaskResultGetter.scala:50)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Exception in thread "task-result-getter-0" java.lang.OutOfMemoryError: Java heap space
    at org.apache.spark.scheduler.DirectTaskResult$$anonfun$readExternal$1.apply$mcV$sp(TaskResult.scala:61)
    at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1137)
    at org.apache.spark.scheduler.DirectTaskResult.readExternal(TaskResult.scala:58)
    at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:1837)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1796)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:68)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:88)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:75)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:51)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:51)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1617)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$2.run(TaskResultGetter.scala:50)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
> Spark's Multilabel Classifier runs out of memory on small datasets
> ------------------------------------------------------------------
>
> Key: SPARK-6864
> URL: https://issues.apache.org/jira/browse/SPARK-6864
> Project: Spark
> Issue Type: Test
> Components: MLlib
> Affects Versions: 1.2.1
> Environment: EC2 with 8-96 instances up to r3.4xlarge
> The test fails on every configuration
> Reporter: John Canny
> Fix For: 1.2.1
>
>
> When trying to run Spark's MultiLabel classifier
> (LogisticRegressionWithLBFGS) on the RCV1 V2 dataset (about 0.5GB, 100
> labels), the classifier runs out of memory. The number of tasks per executor
> doesn't seem to matter. It happens even with a single task per 120 GB
> executor. The dataset is the concatenation of the test files from the "rcv1v2
> (topics; full sets)" group here:
> http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multilabel.html
> Here's the code:
> import org.apache.spark.SparkContext
> import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
> import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
> import org.apache.spark.mllib.optimization.L1Updater
> import org.apache.spark.mllib.regression.LabeledPoint
> import org.apache.spark.mllib.linalg.Vectors
> import org.apache.spark.mllib.util.MLUtils
> import scala.compat.Platform._
> val nnodes = 8
> val t0 = currentTime
> // Load training and test data in LIBSVM format (multiclass, 276544 features, nnodes partitions).
> val train = MLUtils.loadLibSVMFile(sc, "s3n://bidmach/RCV1train.libsvm", true, 276544, nnodes)
> val test = MLUtils.loadLibSVMFile(sc, "s3n://bidmach/RCV1test.libsvm", true, 276544, nnodes)
> val t1 = currentTime
> // Configure 100-class logistic regression trained with L-BFGS.
> val lrAlg = new LogisticRegressionWithLBFGS()
> lrAlg.setNumClasses(100).optimizer.
>   setNumIterations(10).
>   setRegParam(1e-10).
>   setUpdater(new L1Updater)
> // Run the training algorithm to build the model
> val model = lrAlg.run(train)
> val t2 = currentTime
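(For completeness: the quoted snippet loads the test set and imports evaluation classes but never scores the model. A minimal continuation, assuming training completes and using MulticlassMetrics rather than the imported BinaryClassificationMetrics since there are 100 labels, might look like this:

    import org.apache.spark.mllib.evaluation.MulticlassMetrics

    // Score the held-out set and compute overall precision across the labels.
    val predictionAndLabels = test.map(p => (model.predict(p.features), p.label))
    val metrics = new MulticlassMetrics(predictionAndLabels)
    println("Test precision: " + metrics.precision)
)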