This happens to me when using the EC2 scripts for the recent v1.0.0rc2 release.
The Master connects and then disconnects immediately, eventually reporting
"Master disconnected from cluster".
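
As a quick sanity check for the loopback issue Matei describes below, here is a
minimal Python sketch (plain stdlib only, nothing from the Spark scripts
themselves) that reports whether a node's hostname resolves to 127.x, which is
the condition behind the "Your hostname ... resolves to a loopback address"
warning:

    import socket

    hostname = socket.gethostname()
    resolved = socket.gethostbyname(hostname)

    if resolved.startswith("127."):
        # SPARK_LOCAL_IP should then be set (e.g. exported in conf/spark-env.sh
        # on each node) to the real interface address, as the warning below says.
        print("%s resolves to loopback %s -- set SPARK_LOCAL_IP" % (hostname, resolved))
    else:
        print("%s resolves to %s" % (hostname, resolved))

If this prints the loopback line on the master or a worker, that would point at
the hostname setup rather than the scripts.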


On Thu, Apr 24, 2014 at 4:01 PM, Matei Zaharia <matei.zaha...@gmail.com> wrote:

> Did you launch this using our EC2 scripts (
> http://spark.apache.org/docs/latest/ec2-scripts.html) or did you manually
> set up the daemons? My guess is that their hostnames are not being resolved
> properly on all nodes, so executor processes can’t connect back to your
> driver app. This error message indicates that:
>
> 14/04/24 09:00:49 WARN util.Utils: Your hostname, spark-node resolves to a
> loopback address: 127.0.0.1; using 10.74.149.251 instead (on interface eth0)
> 14/04/24 09:00:49 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to
> another address
>
> If you launch with the EC2 scripts, or don’t manually change the
> hostnames, this should not happen.
>
> Matei
>
> On Apr 24, 2014, at 11:36 AM, John King <usedforprinting...@gmail.com>
> wrote:
>
> Same problem.
>
>
> On Thu, Apr 24, 2014 at 10:54 AM, Shubhabrata <mail2shu...@gmail.com> wrote:
>
>> Moreover, it seems all the workers are registered and have sufficient
>> memory (2.7 GB, whereas I have asked for 512 MB). The UI also shows the
>> jobs are running on the slaves. But on the terminal it is still the same
>> error: "Initial job has not accepted any resources; check your cluster UI
>> to ensure that workers are registered and have sufficient memory"
>>
>> Please see the screenshot. Thanks
>>
>> <http://apache-spark-user-list.1001560.n3.nabble.com/file/n4761/33.png>
>>
>>
>>
>>
>
>
>
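
Regarding the "Initial job has not accepted any resources" message quoted
above, one thing worth double-checking is that the memory and cores the driver
requests actually fit what the registered workers advertise in the cluster UI.
A minimal PySpark sketch of pinning those down explicitly (the master URL and
the values are placeholders, not taken from this thread):

    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .setMaster("spark://<master-hostname>:7077")  # placeholder master URL
            .setAppName("resource-check")
            .set("spark.executor.memory", "512m")  # keep at or below worker free memory
            .set("spark.cores.max", "2"))          # keep at or below total free cores

    sc = SparkContext(conf=conf)
    # A trivial job: if executors are actually granted, this returns quickly
    # instead of looping on the "not accepted any resources" warning.
    print(sc.parallelize(range(100)).count())
    sc.stop()

If the job still hangs with settings well under the advertised 2.7 GB, the
cause is more likely the hostname/connectivity issue above than memory.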
