I think your *sparkUrl* points to an invalid cluster URL. Make sure you
are using the correct URL (the one shown at the top left of the
master:8080 web UI).
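
For reference, here's a minimal sketch of how the master URL is usually
wired into the driver for the Scala streaming exercise (the host name is
hypothetical and the tutorial's exact setup may differ; substitute the
spark:// URL from your own web UI, noting that it uses port 7077, not
the 8080 web-UI port):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkUrlSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical master URL -- replace with the spark:// address shown
    // at the top left of the master:8080 web UI.
    val sparkUrl = "spark://ec2-54-0-0-1.compute-1.amazonaws.com:7077"

    val conf = new SparkConf()
      .setMaster(sparkUrl)
      .setAppName("TwitterStreamingSketch")

    // One-second batches, matching the batch markers in the output below.
    val ssc = new StreamingContext(conf, Seconds(1))

    // ... create the Twitter DStream and transformations here ...

    ssc.start()
    ssc.awaitTermination()
  }
}

If that URL is wrong (or points at the web-UI port), the driver never
registers with the master, and you see exactly the "All masters are
unresponsive" errors quoted below.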

Thanks
Best Regards


On Tue, Aug 26, 2014 at 11:07 AM, Forest D <dev24a...@gmail.com> wrote:

> Hi Jonathan,
>
> Thanks for the reply. I ran other exercises (movie recommendation and
> GraphX) on the same cluster and did not see these errors. So I think this
> might not be related to the memory setting.
>
> Thanks,
> Forest
>
> On Aug 24, 2014, at 10:27 AM, Jonathan Haddad <j...@jonhaddad.com> wrote:
>
> > Could you be hitting this?
> > https://issues.apache.org/jira/browse/SPARK-3178
> >
> > On Sun, Aug 24, 2014 at 10:21 AM, Forest D <dev24a...@gmail.com> wrote:
> >> Hi folks,
> >>
> >> I have been trying to run the AMPLab's Twitter streaming example
> >> (http://ampcamp.berkeley.edu/big-data-mini-course/realtime-processing-with-spark-streaming.html)
> >> for the last 2 days. I have encountered the same error messages, as shown
> >> below:
> >>
> >> 14/08/24 17:14:22 ERROR client.AppClient$ClientActor: All masters are unresponsive! Giving up.
> >> 14/08/24 17:14:22 ERROR cluster.SparkDeploySchedulerBackend: Spark cluster looks dead, giving up.
> >> [error] (Thread-39) org.apache.spark.SparkException: Job aborted: Spark cluster looks down
> >> org.apache.spark.SparkException: Job aborted: Spark cluster looks down
> >>    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1028)
> >>    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1026)
> >>    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> >>    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> >>    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$abortStage(DAGScheduler.scala:1026)
> >>    at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:619)
> >>    at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:619)
> >>    at scala.Option.foreach(Option.scala:236)
> >>    at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:619)
> >>    at org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:207)
> >>    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
> >>    at akka.actor.ActorCell.invoke(ActorCell.scala:456)
> >>    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
> >>    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
> >>    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
> >>    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:262)
> >>    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:975)
> >>    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1478)
> >>    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
> >> [trace] Stack trace suppressed: run last compile:run for the full output.
> >> -------------------------------------------
> >> Time: 1408900463000 ms
> >> -------------------------------------------
> >>
> >> 14/08/24 17:14:23 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
> >>
> >> [identical empty-batch markers repeat once per second for Times 1408900464000 through 1408900478000]
> >>
> >> 14/08/24 17:14:38 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
> >>
> >> [empty-batch markers continue for Times 1408900479000 through 1408900482000]
> >>
> >> 14/08/24 17:14:42 ERROR client.AppClient$ClientActor: All masters are unresponsive! Giving up.
> >>
> >> [empty-batch markers continue for Times 1408900483000 and 1408900484000]
> >>
> >> I checked my cluster status and found that no memory is in use:
> >>
> >> Workers: 5
> >> Cores: 20 Total, 0 Used
> >> Memory: 68.2 GB Total, 0.0 B Used
> >> Applications: 0 Running, 0 Completed
> >> Drivers: 0 Running, 0 Completed
> >>
> >> Can anyone shed some light on this issue?
> >>
> >> Thanks,
> >> Senhua
> >
> >
> >
> > --
> > Jon Haddad
> > http://www.rustyrazorblade.com
> > twitter: rustyrazorblade
> >
>
>
