The triangle count also failed for me when I ran it on more than one node.
The failure comes from this assertion in TriangleCount.scala:
// double count should be even (divisible by two)
assert((dblCount & 1) == 0)
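For context, that assertion sits at the very end of run(), where each vertex's count
is halved: every triangle a vertex belongs to is discovered once through each of the
vertex's two edges in that triangle, so the per-vertex total should always be even.
Paraphrased from the Spark 1.x source (details may differ in your exact version), the
surrounding code is roughly:

// merge the per-vertex counters back into the graph and divide by two,
// since each triangle is counted twice (once per adjacent edge)
graph.outerJoinVertices(counters) {
  (vid, _, optCounter: Option[Int]) =>
    val dblCount = optCounter.getOrElse(0)
    // double count should be even (divisible by two)
    assert((dblCount & 1) == 0)
    dblCount / 2
}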
The assertion did not hold when I ran on multiple nodes, even after following the
guidelines: making sure that all source ids are greater than destination ids and
partitioning the graph with RandomVertexCut. I didn't dig into the code to see why
the assertion fails, but commenting that line out allowed the job to run. I'm not
sure how much I can trust the results, but they looked generally right.
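For reference, this is roughly how I set up the graph before calling triangleCount
(just a sketch: rawEdges is a placeholder name for however the edge list gets loaded,
and the canonicalization/partitioning follows what the GraphX docs describe):

import org.apache.spark.graphx._

// rawEdges: RDD[(Long, Long)], a placeholder for the raw edge list
val canonicalEdges = rawEdges
  .filter { case (a, b) => a != b }                      // drop self-loops
  .map { case (a, b) => if (a < b) (a, b) else (b, a) }  // one consistent direction per edge
  .distinct()                                            // drop duplicate edges
  .map { case (a, b) => Edge(a, b, 1) }

val graph = Graph.fromEdges(canonicalEdges, defaultValue = 1)
  .partitionBy(PartitionStrategy.RandomVertexCut)

val triangleCounts = graph.triangleCount().vertices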
On Wed, Sep 10, 2014 at 6:31 PM, Vipul Pandey wrote:
> Hi,
>
> I have a small graph with about 3.3M vertices and close to 7.5M edges.
> It's a pretty innocent graph with a max degree of 8.
> Unfortunately, graph.triangleCount is failing on me with the exception
> below. I'm running a spark-shell on CDH5.1 with the following params:
> SPARK_DRIVER_MEM=10g ADD_JARS=./path/to/my-jar-with-dependencies.jar
> SPARK_WORKER_INSTANCES=120 SPARK_WORKER_MEMORY=5g
> SPARK_YARN_APP_NAME=VipulsSparkShell MASTER=yarn-client spark-shell
>
> Any clue, anyone?
> Vipul
>
>
> 14/09/10 16:12:22 INFO cluster.YarnClientClusterScheduler: Stage 80 was cancelled
> 14/09/10 16:12:22 INFO scheduler.TaskSetManager: Loss was due to java.lang.AssertionError: assertion failed [duplicate 8]
> 14/09/10 16:12:22 WARN scheduler.TaskSetManager: Task 326 was killed.
> 14/09/10 16:12:22 WARN scheduler.TaskSetManager: Task 325 was killed.
> 14/09/10 16:12:22 WARN scheduler.TaskSetManager: Task 320 was killed.
> 14/09/10 16:12:22 WARN scheduler.TaskSetManager: Task 324 was killed.
> 14/09/10 16:12:22 WARN scheduler.TaskSetManager: Task 322 was killed.
>
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 80.0:6 failed 4 times, most recent failure: Exception failure in TID 321 on host abc.xyz.com: java.lang.AssertionError: assertion failed
> scala.Predef$.assert(Predef.scala:165)
> org.apache.spark.graphx.lib.TriangleCount$$anonfun$run$1.apply(TriangleCount.scala:89)
> org.apache.spark.graphx.lib.TriangleCount$$anonfun$run$1.apply(TriangleCount.scala:86)
> org.apache.spark.graphx.impl.VertexPartitionBaseOps.leftJoin(VertexPartitionBaseOps.scala:125)
> org.apache.spark.graphx.VertexRDD$$anonfun$3.apply(VertexRDD.scala:192)
> org.apache.spark.graphx.VertexRDD$$anonfun$3.apply(VertexRDD.scala:189)
> org.apache.spark.rdd.ZippedPartitionsRDD2.compute(ZippedPartitionsRDD.scala:87)
> org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
> org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
> org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
> org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:158)
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
> org.apache.spark.scheduler.Task.run(Task.scala:51)
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> java.lang.Thread.run(Thread.java:744)
> Driver stacktrace:
> at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1033)
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1017)
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1015)
> at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1015)
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:633)
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:633)
> at scala.Option.foreach(Option.scala:236)
> at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:633)
> at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1207)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
> at akka.actor.ActorCell.invoke(ActorCell.scala:456)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
> at akka.dispatch.Mailbox.run(Mailbox.scala:219)
> at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
> at scala.concurrent.forkjo