Hi,

I'm trying to run a Spark application with executor-memory set to 3G, but I'm
running into the following problem:

14/08/05 18:02:58 INFO DAGScheduler: Submitting Stage 0 (MappedRDD[5] at map at KMeans.scala:123), which has no missing parents
14/08/05 18:02:58 INFO DAGScheduler: Submitting 1 missing tasks from Stage 0 (MappedRDD[5] at map at KMeans.scala:123)
14/08/05 18:02:58 INFO YarnClusterScheduler: Adding task set 0.0 with 1 tasks
14/08/05 18:02:59 INFO CoarseGrainedSchedulerBackend: Registered executor: Actor[akka.tcp://sparkexecu...@test-hadoop2.vpc.natero.com:54358/user/Executor#1670455157] with ID 2
14/08/05 18:02:59 INFO BlockManagerInfo: Registering block manager test-hadoop2.vpc.natero.com:39156 with 1766.4 MB RAM
14/08/05 18:03:13 WARN YarnClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
14/08/05 18:03:28 WARN YarnClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
14/08/05 18:03:43 WARN YarnClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
14/08/05 18:03:58 WARN YarnClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory


I tried tweaking executor-memory as well, but got the same result: it
always gets stuck right after registering the block manager.
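
For reference, my submit command looks roughly like this (the class name
and jar path below are placeholders, not my exact ones):

    spark-submit \
      --master yarn-cluster \
      --class com.example.KMeansApp \
      --executor-memory 3G \
      --num-executors 2 \
      /path/to/my-app.jar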


Are there any other settings that need to be adjusted?
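
For example, I'm not sure whether the YARN-side container limits could be
the issue. These are the settings I was planning to check in yarn-site.xml
(property names are from the Hadoop docs; the values below are just
examples, not my actual configuration):

    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>8192</value>
    </property>
    <property>
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>8192</value>
    </property>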


Thanks

Sunny
