Re: Fwd: seeing this message repeatedly.

2016-09-05 Thread Radoslaw Gruchalski
All your workers go via the public IP. Do you have the required ports open? Why the public
IP? Is it not better to use the private 10.x address?
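
For illustration, a minimal sketch of binding each daemon to its private interface in spark-env.sh (assuming a standalone cluster; the 10.0.0.x addresses below are placeholders, and 7077 is just the default master port):

# on the master node
export SPARK_LOCAL_IP=10.0.0.1      # bind the master daemon to its private interface
# on each worker node
export SPARK_LOCAL_IP=10.0.0.2      # bind this worker to its own private interface
# then point each worker at the master's private address, e.g.
#   sbin/start-slave.sh spark://10.0.0.1:7077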

–
Best regards,
Radek Gruchalski
ra...@gruchalski.com


On September 5, 2016 at 11:49:30 AM, kant kodali (kanth...@gmail.com) wrote:



-- Forwarded message --
From: kant kodali 
Date: Sat, Sep 3, 2016 at 5:39 PM
Subject: seeing this message repeatedly.
To: "user @spark" 



Hi Guys,

I am running my driver program on my local machine and my Spark cluster is
on AWS. The big question is that I don't know the right settings to get
around this public vs. private IP issue on AWS. My spark-env.sh currently
has the following lines:

export SPARK_PUBLIC_DNS="52.44.36.224"
export SPARK_WORKER_CORES=12
export SPARK_MASTER_OPTS="-Dspark.deploy.defaultCores=4"
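
As a side note, when the driver runs outside the cluster, the worker-to-driver direction usually matters as well. A hedged sketch of the driver-side properties typically involved (the hostname and port values are placeholders, not settings confirmed in this thread; they can go in spark-defaults.conf or on the SparkConf):

spark.driver.host        <address-of-driver-reachable-from-workers>
spark.driver.port        51000     # example value; pin it so it can be opened in the firewall
spark.blockManager.port  51100     # example value; executors connect back to the driver here

One common cause of the warning below, when workers are registered but the job never starts, is executors that launch but cannot reach back to the driver on these ports.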

I am seeing the lines below when I run my driver program on my local
machine, and I am not sure what is going on:



16/09/03 17:32:15 INFO DAGScheduler: Submitting 50 missing tasks from
ShuffleMapStage 0 (MapPartitionsRDD[1] at start at Consumer.java:41)
16/09/03 17:32:15 INFO TaskSchedulerImpl: Adding task set 0.0 with 50 tasks
16/09/03 17:32:30 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient resources
16/09/03 17:32:45 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient resources

