The problem is resolved. I added SPARK_LOCAL_IP=master in both slaves as
well. When I changed this, my slaves started working.
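For reference, the reported fix can be sketched as a spark-env.sh fragment. This is a sketch under assumptions: the file lives at conf/spark-env.sh in the Spark install, and "master" is a hostname that resolves on each slave (both are conventions, not stated in the thread).

```shell
# conf/spark-env.sh on each slave (sketch; adjust the path to your install)
# Setting the author reports adding on both slaves: pin the address Spark
# binds to, instead of letting it guess a local IP for this machine.
SPARK_LOCAL_IP=master
```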
Thank you all for your suggestions
Thanks & Regards,
Meethu M
On Wednesday, 2 July 2014 10:22 AM, Aaron Davidson ilike...@gmail.com wrote:
Hi,
I ran netstat -na | grep 192.168.125.174 and it shows 192.168.125.174:7077
LISTEN (after starting the master).
I tried to execute the following script from the slaves manually, but it ends up
with the same exception and log. This script internally executes the java
command.
In your spark-env.sh, do you happen to set SPARK_PUBLIC_DNS or something of
that kind? This error suggests the worker is trying to bind a server on the
master's IP, which clearly doesn't make sense.
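A minimal sketch of the spark-env.sh settings being asked about. The variable names are real Spark standalone-mode settings; the values shown are hypothetical, purely to illustrate the misconfiguration Aaron suspects:

```shell
# conf/spark-env.sh -- hypothetical values for illustration only
# If a worker's config points at the master's name/IP here, the worker
# may try to bind a server socket on the master's address and fail:
# SPARK_PUBLIC_DNS=master           # suspect setting to check on workers
# SPARK_LOCAL_IP=192.168.125.174    # should be THIS machine's address
```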
On Mon, Jun 30, 2014 at 11:59 PM, MEETHU MATHEW meethu2...@yahoo.co.in
wrote:
Hi all,
I reinstalled Spark and rebooted the system, but I am still not able to start
the workers. It's throwing the following exception:
Exception in thread "main" org.jboss.netty.channel.ChannelException: Failed to
bind to: master/192.168.125.174:0
I suspect the problem is with 192.168.125.174:0.
Hi Akhil,
The IP is correct, and the workers start when we launch them directly as a java
command. The address becomes 192.168.125.174:0 when we call it from the scripts.
Thanks & Regards,
Meethu M
On Friday, 27 June 2014 1:49 PM, Akhil Das ak...@sigmoidanalytics.com wrote:
why is it binding to