Hi,

(I haven't played with GraphFrames)

What's your `sc.master`? How do you run your application --
spark-submit, java -jar, sbt run, or...? The reason I'm asking is
that a few of those options may not be in effect at all, e.g.
spark.driver.memory and spark.executor.memory in local mode, where
everything runs inside a single driver JVM.
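In case it helps, here's a minimal sketch (the app name is made up, and
it assumes you launch with spark-submit so a master URL is supplied) that
just prints what master and memory settings the running context actually
sees. Note that spark.driver.memory only matters if it's set before the
driver JVM starts, e.g. via spark-submit --driver-memory or
spark-defaults.conf, not from SparkConf inside the app.

~~~~~
// Minimal sketch, not from your app -- "motif-config-check" is a made-up name.
// Prints the effective master and memory settings of the running SparkContext.
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("motif-config-check")                  // hypothetical app name
  .set("spark.kryoserializer.buffer.max", "1024m")   // honored when set here

val sc = new SparkContext(conf)

println(s"master          = ${sc.master}")
// spark.driver.memory set via SparkConf has no effect once the driver JVM is up;
// in local mode spark.executor.memory is not used at all.
println(s"driver.memory   = ${sc.getConf.get("spark.driver.memory", "<not set>")}")
println(s"executor.memory = ${sc.getConf.get("spark.executor.memory", "<not set>")}")
~~~~~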

Pozdrawiam,
Jacek Laskowski
----
https://medium.com/@jaceklaskowski/
Mastering Apache Spark http://bit.ly/mastering-apache-spark
Follow me at https://twitter.com/jaceklaskowski


On Sat, Apr 9, 2016 at 7:51 PM, Buntu Dev <buntu...@gmail.com> wrote:
> I'm running this motif pattern against 1.5M vertices (5.5 MB) and 10M edges (60 MB):
>
>  tgraph.find("(a)-[]->(b); (c)-[]->(b); (c)-[]->(d)")
>
> I keep running into Java heap space errors:
>
> ~~~~~
>
> ERROR actor.ActorSystemImpl: Uncaught fatal error from thread [sparkDriver-akka.actor.default-dispatcher-33] shutting down ActorSystem [sparkDriver]
> java.lang.OutOfMemoryError: Java heap space
> at scala.reflect.ManifestFactory$$anon$6.newArray(Manifest.scala:90)
> at scala.reflect.ManifestFactory$$anon$6.newArray(Manifest.scala:88)
> at scala.Array$.ofDim(Array.scala:218)
> at akka.util.ByteIterator.toArray(ByteIterator.scala:462)
> at akka.util.ByteString.toArray(ByteString.scala:321)
> at akka.remote.transport.AkkaPduProtobufCodec$.decodePdu(AkkaPduCodec.scala:168)
> at akka.remote.transport.ProtocolStateActor.akka$remote$transport$ProtocolStateActor$$decodePdu(AkkaProtocolTransport.scala:513)
> at akka.remote.transport.ProtocolStateActor$$anonfun$5.applyOrElse(AkkaProtocolTransport.scala:357)
> at akka.remote.transport.ProtocolStateActor$$anonfun$5.applyOrElse(AkkaProtocolTransport.scala:352)
> at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
> at akka.actor.FSM$class.processEvent(FSM.scala:595)
> at akka.remote.transport.ProtocolStateActor.processEvent(AkkaProtocolTransport.scala:220)
> at akka.actor.FSM$class.akka$actor$FSM$$processMsg(FSM.scala:589)
> at akka.actor.FSM$$anonfun$receive$1.applyOrElse(FSM.scala:583)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
> at akka.actor.ActorCell.invoke(ActorCell.scala:456)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
> at akka.dispatch.Mailbox.run(Mailbox.scala:219)
> at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>
> ~~~~~
>
>
> Here is my config:
>
> conf.set("spark.executor.memory", "8192m")
> conf.set("spark.executor.cores", 4)
> conf.set("spark.driver.memory", "10240m")
> conf.set("spark.driver.maxResultSize", "2g")
> conf.set("spark.kryoserializer.buffer.max", "1024mb")
>
>
> I wanted to know if there are any other configs to tweak.
>
>
> Thanks!

