RE: Is there a way to force spark to use specific ips?

2014-12-07 Thread Ashic Mahtab
Hi Matt,

That's what I'm seeing too. I've reverted to creating a fact in the Vagrantfile 
and adding a host entry in Puppet. That saves me from having to have the 
Vagrant plugin installed. Vagrant-hosts looks interesting for scenarios where I 
control all the machines.

Cheers,
Ashic.
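
For reference, a minimal sketch of that workaround (the fact name, manifest 
path, and node name below are made up for illustration; the 192.168.40.1 
address follows this thread's example):

    # Vagrantfile: pass the driver host's IP into Puppet as a custom fact.
    Vagrant.configure("2") do |config|
      config.vm.provision :puppet do |puppet|
        puppet.manifests_path = "puppet/manifests"
        puppet.manifest_file  = "site.pp"
        # "driver_host_ip" is an illustrative fact name.
        puppet.facter = { "driver_host_ip" => "192.168.40.1" }
      end
    end

Puppet's built-in host resource then turns the fact into an /etc/hosts entry:

    # puppet/manifests/site.pp
    host { 'driver-host':
      ensure => present,
      ip     => $driver_host_ip,
    }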

Subject: Re: Is there a way to force spark to use specific ips?
From: matt.narr...@gmail.com
Date: Sat, 6 Dec 2014 16:34:13 -0700
CC: user@spark.apache.org
To: as...@live.com

It's much easier if you access your nodes by name. If you're using Vagrant, use 
the hosts provisioner: https://github.com/adrienthebo/vagrant-hosts
mn
On Dec 6, 2014, at 8:37 AM, Ashic Mahtab wrote:

Hi,
It appears that Spark is always attempting to use the driver's hostname to 
connect / broadcast. This is usually fine, except when the cluster doesn't have 
DNS configured, for example in a Vagrant cluster with a private network. The 
workers, the masters, and the host (where the driver runs) can all see each 
other by IP. I can also specify --conf "spark.driver.host=192.168.40.1", and 
that results in the workers being able to connect to the driver. However, when 
trying to broadcast anything, it still tries to use the hostname of the host. 
Now, I can set up a host entry in /etc/hosts, but I was wondering if there's a 
way to avoid the hassle. Is there any way I can force Spark to always use IPs 
and not hostnames?

Thanks,
Ashic.

Re: Is there a way to force spark to use specific ips?

2014-12-06 Thread Matt Narrell
It's much easier if you access your nodes by name. If you're using Vagrant, use 
the hosts provisioner: https://github.com/adrienthebo/vagrant-hosts
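
For anyone finding this later, a minimal sketch of a Vagrantfile using that 
plugin (the box name and addresses are placeholders, and the provisioner 
options should be checked against the vagrant-hosts README):

    # Requires: vagrant plugin install vagrant-hosts
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/trusty64"  # placeholder box

      ["master", "worker1", "worker2"].each_with_index do |name, i|
        config.vm.define name do |node|
          node.vm.hostname = name
          node.vm.network :private_network, ip: "192.168.40.#{10 + i}"
          # Sync every node's hostname/IP pair into each guest's /etc/hosts.
          node.vm.provision :hosts, sync_hosts: true
        end
      end
    end

With that in place, the workers and master can resolve one another by name 
without external DNS, which sidesteps the hostname problem described in the 
quoted message below.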


mn

> On Dec 6, 2014, at 8:37 AM, Ashic Mahtab  wrote:
> 
> Hi,
> It appears that Spark is always attempting to use the driver's hostname to 
> connect / broadcast. This is usually fine, except when the cluster doesn't 
> have DNS configured, for example in a Vagrant cluster with a private 
> network. The workers, the masters, and the host (where the driver runs) 
> can all see each other by IP. I can also specify --conf 
> "spark.driver.host=192.168.40.1", and that results in the workers being able 
> to connect to the driver. However, when trying to broadcast anything, it 
> still tries to use the hostname of the host. Now, I can set up a host entry 
> in /etc/hosts, but I was wondering if there's a way to avoid the hassle. 
> Is there any way I can force Spark to always use IPs and not hostnames?
> 
> Thanks,
> Ashic.



Is there a way to force spark to use specific ips?

2014-12-06 Thread Ashic Mahtab
Hi,

It appears that Spark is always attempting to use the driver's hostname to 
connect / broadcast. This is usually fine, except when the cluster doesn't have 
DNS configured, for example in a Vagrant cluster with a private network. The 
workers, the masters, and the host (where the driver runs) can all see each 
other by IP. I can also specify --conf "spark.driver.host=192.168.40.1", and 
that results in the workers being able to connect to the driver. However, when 
trying to broadcast anything, it still tries to use the hostname of the host. 
Now, I can set up a host entry in /etc/hosts, but I was wondering if there's a 
way to avoid the hassle. Is there any way I can force Spark to always use IPs 
and not hostnames?

Thanks,
Ashic.
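
In case it helps, one way to keep everything on IPs in a setup like this is to 
bind each daemon to its private-network address and advertise the driver by 
IP. A sketch, assuming Spark standalone (the 192.168.40.x addresses follow the 
example above; the master address, class, and jar names are made up):

    # On each worker/master, in conf/spark-env.sh: bind to the private-network
    # IP instead of whatever the machine's hostname resolves to.
    export SPARK_LOCAL_IP=192.168.40.11

    # From the host running the driver, advertise the driver by IP too:
    spark-submit \
      --master spark://192.168.40.10:7077 \
      --conf "spark.driver.host=192.168.40.1" \
      --class com.example.Main \
      my-app.jar

SPARK_LOCAL_IP is the documented standalone setting for the address Spark 
binds to on a given machine, so the daemons stop depending on what the 
hostname resolves to.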