I found this thread, which helped me understand it better:

https://mail-archives.apache.org/mod_mbox/mesos-user/201507.mbox/%3ccajq68qf9pejgnwomasm2dqchyaxpcaovnfkfgggxxpzj2jo...@mail.gmail.com%3E

>
> When you run Spark on Mesos, it needs to run
>
> the Spark driver
> the Mesos scheduler
>
> and both need to be visible to the outside world on the public
> interface IP.
>
> You need to tell Spark and Mesos which interface to bind to - by
> default they resolve the node hostname to an IP, which in your case is
> the loopback address.
>
> Possible solutions, on a slave node with public IP 192.168.56.50:
>
> 1. Set
>
>    export LIBPROCESS_IP=192.168.56.50
>    export SPARK_LOCAL_IP=192.168.56.50
>
> 2. Ensure your hostname resolves to the public interface IP - for
>    testing, edit /etc/hosts to resolve your domain name to
>    192.168.56.50.
> 3. Set the correct hostname/IP in the Mesos configuration - see
>    Nikolaos's answer.
>
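
To make options 1 and 2 above concrete, here is roughly what worked for
us. This is only a sketch: 192.168.56.50 is the example address from the
quote, the mesos:// master URL is a placeholder for your own cluster
(5050 is the Mesos master's default port), and node1 is a hypothetical
hostname.

    # Option 1: bind libprocess (Mesos) and Spark to the public interface
    export LIBPROCESS_IP=192.168.56.50
    export SPARK_LOCAL_IP=192.168.56.50

    # Then launch a shell in client mode against the Mesos master
    # (replace with your own master URL, or mesos://zk://... for HA)
    ./bin/spark-shell --master mesos://192.168.56.50:5050

    # Option 2 (testing only): make this node's hostname resolve to the
    # public IP by adding a line like the following to /etc/hosts:
    # 192.168.56.50   node1.example.com   node1

With either in place, the driver advertises a reachable address instead
of the loopback one, and the Mesos master can connect back to it.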

Cheers,
Aaron

On Wed, Dec 16, 2015 at 11:00 AM, Iulian Dragoș
<iulian.dra...@typesafe.com> wrote:
> Hi Aaron,
>
> I never had to use that variable. What is it for?
>
> On Wed, Dec 16, 2015 at 2:00 PM, Aaron <aarongm...@gmail.com> wrote:
>>
>> While running various Spark jobs, on both Spark 1.5.2 and the new
>> Spark 1.6 SNAPSHOTs, against a Mesos cluster (currently 0.25), we
>> noticed that in order to run the Spark shells (both Python and
>> Scala), we needed to set the LIBPROCESS_IP environment variable
>> first.
>>
>> I was curious whether the Spark on Mesos docs should be updated,
>> under the Client Mode section, to mention setting this environment
>> variable.
>>
>> Cheers
>> Aaron
>>
>
>
>
> --
> Iulian Dragos
>
> ------
> Reactive Apps on the JVM
> www.typesafe.com
>

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org
