Try increasing the Spark worker memory in conf/spark-env.sh:

    export SPARK_WORKER_MEMORY=2g
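Note that SPARK_WORKER_MEMORY only raises the total memory a worker may hand out to executors; each executor still gets the 512 MB your application is requesting unless you raise that too. A minimal sketch of conf/spark-env.sh on each worker node (2g and the core count are illustrative values, size them to your machines):

    # conf/spark-env.sh -- total memory this worker can allocate to executors
    export SPARK_WORKER_MEMORY=2g
    # optionally cap the cores this worker offers
    export SPARK_WORKER_CORES=2

Restart the workers afterwards so the new limits show up in the master UI.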
Thanks, Madhu.

From: Ratika Prasad <rprasad@couponsinc.com>
To: dev@spark.apache.org
Date: 08/19/2015 09:22 PM
Subject: Unable to run the spark application in standalone cluster mode

Hi,

We have a simple Spark application which runs through when launched locally on the master node as below:

    ./bin/spark-submit --class com.coupons.salestransactionprocessor.SalesTransactionDataPointCreation --master local sales-transaction-processor-0.0.1-SNAPSHOT-jar-with-dependencies.jar

However, when I try to run it in cluster mode [our Spark cluster has two nodes, one master and one slave, with executor memory of 512 MB], the application fails with the output below. Please provide some input as to why.

15/08/19 15:37:52 INFO client.AppClient$ClientActor: Executor updated: app-20150819153234-0001/8 is now RUNNING
15/08/19 15:37:56 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/08/19 15:38:11 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/08/19 15:38:26 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/08/19 15:38:32 INFO client.AppClient$ClientActor: Executor updated: app-20150819153234-0001/8 is now EXITED (Command exited with code 1)
15/08/19 15:38:32 INFO cluster.SparkDeploySchedulerBackend: Executor app-20150819153234-0001/8 removed: Command exited with code 1
15/08/19 15:38:32 INFO client.AppClient$ClientActor: Executor added: app-20150819153234-0001/9 on worker-20150812111932-ip-172-28-161-173.us-west-2.compute.internal-50108 (ip-172-28-161-173.us-west-2.compute.internal:50108) with 1 cores
15/08/19 15:38:32 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20150819153234-0001/9 on hostPort ip-172-28-161-173.us-west-2.compute.internal:50108 with 1 cores, 512.0 MB RAM
15/08/19 15:38:32 INFO client.AppClient$ClientActor: Executor updated: app-20150819153234-0001/9 is now RUNNING
15/08/19 15:38:41 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/08/19 15:38:56 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/08/19 15:39:11 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/08/19 15:39:12 INFO client.AppClient$ClientActor: Executor updated: app-20150819153234-0001/9 is now EXITED (Command exited with code 1)
15/08/19 15:39:12 INFO cluster.SparkDeploySchedulerBackend: Executor app-20150819153234-0001/9 removed: Command exited with code 1
15/08/19 15:39:12 ERROR cluster.SparkDeploySchedulerBackend: Application has been killed.
Reason: Master removed our application: FAILED
15/08/19 15:39:12 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/metrics/json,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/stage/kill,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/static,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/executors/json,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/executors,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/environment/json,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/environment,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/storage/rdd/json,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/storage/rdd,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/storage/json,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/storage,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/pool/json,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/pool,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/stage/json,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/stage,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/json,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages,null}
15/08/19 15:39:12 INFO scheduler.TaskSchedulerImpl: Cancelling stage 0
15/08/19 15:39:12 INFO scheduler.DAGScheduler: Failed to run count at SalesTransactionDataPointCreation.java:29
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Master removed our application: FAILED
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
    at akka.actor.ActorCell.invoke(ActorCell.scala:456)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
15/08/19 15:39:12 WARN thread.QueuedThreadPool: 8 threads could not be stopped
15/08/19 15:39:12 INFO ui.SparkUI: Stopped Spark web UI at http://172.28.161.131:4040
15/08/19 15:39:12 INFO scheduler.DAGScheduler: Stopping DAGScheduler
15/08/19 15:39:12 INFO cluster.SparkDeploySchedulerBackend: Shutting down all executors
15/08/19 15:39:12 INFO cluster.SparkDeploySchedulerBackend: Asking each executor to shut down

Thanks,
R
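For cluster mode, the submit command also needs to point --master at the standalone master rather than local, and it helps to request executor memory explicitly so the request fits within what the workers offer. A sketch of what that might look like (the master host name, port, and resource values are placeholders to adjust for your cluster):

    ./bin/spark-submit \
      --class com.coupons.salestransactionprocessor.SalesTransactionDataPointCreation \
      --master spark://<master-host>:7077 \
      --executor-memory 512m \
      --total-executor-cores 2 \
      sales-transaction-processor-0.0.1-SNAPSHOT-jar-with-dependencies.jar

Keep --executor-memory at or below the SPARK_WORKER_MEMORY configured on each worker; if the application asks for more than any worker can grant, the job sits in the "Initial job has not accepted any resources" state shown in the log above.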