Re: Problems connecting with Mesos Master

2015-07-28 Thread Ondrej Smola
Hi Haripriya,

When you run Spark on Mesos, it needs to run

spark driver
mesos scheduler

and both need to be visible to the outside world on the public interface IP.

You need to tell Spark and Mesos which interface to bind to - by default
they resolve the node hostname to an IP, which in your case is the loopback
address.

Possible solutions - on the slave node with public IP 192.168.56.50:

1. Set

   export LIBPROCESS_IP=192.168.56.50
   export SPARK_LOCAL_IP=192.168.56.50

2. Ensure your hostname resolves to the public interface IP - (for testing)
edit /etc/hosts to resolve your hostname to 192.168.56.50
3. Set the correct hostname/ip in the Mesos configuration - see Nikolaos' answer
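A minimal sketch of option 1, run in the same shell session that will launch spark-shell (192.168.56.50 is the slave's public IP from this thread; adjust to your own interface address):

```shell
# Export both variables before launching spark-shell so the Mesos scheduler
# driver (libprocess) and Spark itself bind to the routable interface.
export LIBPROCESS_IP=192.168.56.50   # interface the libprocess/scheduler binds to
export SPARK_LOCAL_IP=192.168.56.50  # interface Spark binds to
# then, from the Spark installation directory:
#   ./bin/spark-shell
```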





SharedFilesystemIsolator (filesystem/shared)

2015-07-28 Thread Jie Yu
Hi Mesos users,

I am wondering if anyone is using this isolator (i.e.,
--isolation=filesystem/shared)? If not, we plan to remove it from the
source code in favor of using the upcoming more general linux filesystem
isolator (https://reviews.apache.org/r/36429/).

- Jie


Re: Spark on Mesos: link to Spark UI is not active

2015-07-28 Thread Philip Weaver
For me, it's 0.23.0 and 1.4.0, respectively.



Re: Spark on Mesos: link to Spark UI is not active

2015-07-28 Thread Adam Bordelon
Spark should be setting the FrameworkInfo.webui_url on scheduler startup.
https://github.com/apache/spark/blob/v1.4.1/core/src/main/scala/org/apache/spark/deploy/mesos/MesosClusterDispatcher.scala#L76
What versions of Mesos and Spark are you using?



Re: Problems connecting with Mesos Master

2015-07-28 Thread Haripriya Ayyalasomayajula
It quits before it writes any logs, when I look at the directory for logs,
it has empty files!


-- 
Regards,
Haripriya Ayyalasomayajula


Re: Spark on Mesos: link to Spark UI is not active

2015-07-28 Thread Philip Weaver
I also have this problem, thanks!



Spark on Mesos: link to Spark UI is not active

2015-07-28 Thread Anton Kirillov
Hi everyone,   

I’m trying to get access to the Spark web UI from the Mesos Master, but with no
success: the host name is displayed properly, but the link is not active, just text.
Maybe it’s a well-known issue or I misconfigured something, but this problem is
really annoying.

When running spark-submit in client mode, the framework is registered properly and
the Spark UI is available on the client node, but in the Master’s interface the
hostname is just text. When launching in cluster mode (the dispatcher is already
launched), only driver information is available, with no reference to the driver UI.

Maybe I missed something, but I'd really appreciate any help!

--  
Anton Kirillov
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)



Re: Problems connecting with Mesos Master

2015-07-28 Thread Vinod Kone
can you paste the logs?

>


Re: Problems connecting with Mesos Master

2015-07-28 Thread Haripriya Ayyalasomayajula
Well, when I try doing that I get this error:

Failed to initialize: Failed to bind on scheduler_ip_address Cannot assign
requested address: Cannot assign requested address [99]

When I do a

 ps -ef | grep mesos

on both my master and slave nodes, it works fine. And, I am also able to
ping both the nodes from each other - they are reachable to one another.



-- 
Regards,
Haripriya Ayyalasomayajula


Re: Problems connecting with Mesos Master

2015-07-28 Thread Vinod Kone
LIBPROCESS_IP is the IP address that you want the scheduler (driver) to
bind to. It has nothing to do with the ZooKeeper address.

In other words, do

export LIBPROCESS_IP=scheduler_ip_address
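In other words, a sketch (both addresses hypothetical): the bare IP goes into LIBPROCESS_IP, while the zk:// URL is only ever the master locator passed to spark-shell:

```shell
# LIBPROCESS_IP takes a plain routable IP: no zk:// scheme, no port, no path.
export LIBPROCESS_IP=192.168.56.50
# The zk:// URL identifies the Mesos master and belongs in the master URL instead:
MASTER_URL="mesos://zk://192.168.56.1:2181/mesos"
# launch (commented out in this sketch):
#   ./bin/spark-shell --master "$MASTER_URL"
```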




Re: Problems connecting with Mesos Master

2015-07-28 Thread Haripriya Ayyalasomayajula
I am trying to do this

export LIBPROCESS_IP=zk://my_ipaddress:2181/mesos

./bin/spark-shell

It gives me this error and aborts

WARNING: Logging before InitGoogleLogging() is written to STDERR

F0728 15:43:39.361445 13209 process.cpp:847] Parsing
LIBPROCESS_IP=zk://my_ipaddress:2181/ failed: Failed to parse the IP



-- 
Regards,
Haripriya Ayyalasomayajula


Re: Problems connecting with Mesos Master

2015-07-28 Thread Tim Chen
spark-env.sh works as it will be called by spark-submit/spark-shell, or you
can just set it before you call spark-shell yourself.

Tim
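For reference, a sketch of the conf/spark-env.sh route (the address is hypothetical); since spark-submit and spark-shell source this file on startup, exports placed there apply to every launch:

```shell
# conf/spark-env.sh (sketch): sourced by spark-submit/spark-shell at startup.
# 192.168.56.50 stands in for the routable IP of the driver host.
export LIBPROCESS_IP=192.168.56.50
export SPARK_LOCAL_IP=192.168.56.50
```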

>


Re: Problems connecting with Mesos Master

2015-07-28 Thread Haripriya Ayyalasomayajula
Hi,

Where can I set the LIBPROCESS_IP env variable? spark-env.sh? That's the
only place I can think of. Can you please point me to any related
documentation?



-- 
Regards,
Haripriya Ayyalasomayajula


RE: Problems connecting with Mesos Master

2015-07-28 Thread Nikolaos Ballas neXus
If you are not using any DNS-like service, create two files called ip and hostname
under /etc/mesos-master/ and put in them the IP of the eth interface.
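A sketch of what that looks like, assuming the standard mesosphere packaging, which reads one value per file under /etc/mesos-master/ and passes each as the matching command-line flag to mesos-master (the IP is illustrative, and a scratch directory stands in for /etc/mesos-master/ so the sketch is safe to run):

```shell
# Each file under the config directory supplies one mesos-master flag.
# A scratch directory stands in for /etc/mesos-master/ here.
CONF_DIR=/tmp/mesos-master-demo
mkdir -p "$CONF_DIR"
echo "192.168.56.50" > "$CONF_DIR/ip"        # consumed as --ip=192.168.56.50
echo "192.168.56.50" > "$CONF_DIR/hostname"  # consumed as --hostname=192.168.56.50
```

On a real master node the files would go under /etc/mesos-master/ itself, followed by a restart of the mesos-master service.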



Sent from my Samsung device




Re: Problems connecting with Mesos Master

2015-07-28 Thread Vinod Kone
did you set LIBPROCESS_IP env variable as the warning suggested?

>


Problems connecting with Mesos Master

2015-07-28 Thread Haripriya Ayyalasomayajula
Hi all,

I am trying to use Spark 1.4.1 with Mesos 0.23.0.

When I try to start my spark-shell, it gives me the following warning:

**

Scheduler driver bound to loopback interface! Cannot communicate with
remote master(s). You might want to set 'LIBPROCESS_IP' environment
variable to use a routable IP address.
---

Spark-shell works fine on the node where I run master, but if I start
running on any of the other slave nodes it gives me the following error:

E0728 11:22:53.176515 10503 socket.hpp:107] Shutdown failed on fd=6:
Transport endpoint is not connected [107]

E0728 11:22:53.210146 10503 socket.hpp:107] Shutdown failed on fd=6:
Transport endpoint is not connected [107]

I have the following configs:


   - zookeeper configured to the mesos master
   - /etc/mesos/zk on all nodes pointing to mesos master ip.

I am not sure whether I need to set the --ip flag, and if so, where to set
it.

-- 
Regards,
Haripriya Ayyalasomayajula


Re: Custom executor

2015-07-28 Thread Tim Chen
Can you explain what your motivations are and what your new custom executor
will do?

Tim

On Tue, Jul 28, 2015 at 5:08 AM, Aaron Carey  wrote:

>  Hi,
>
> Is it possible to build a custom executor which is not associated with a
> particular scheduler framework? I want to be able to write a custom
> executor which is available to multiple schedulers (eg Marathon, Chronos
> and our own custom scheduler). Is this possible? I couldn't quite figure
> out the best way to go about this from the docs? Is it possible to mix and
> match languages for schedulers and executors? (ie one is python one is C++)
>
> Thanks,
> Aaron
>


Re: Can't get SRV records from Mesos-DNS

2015-07-28 Thread Itamar Ostricher
ah, thanks! missed that part... it did the trick :-)

On Tue, Jul 28, 2015 at 7:25 PM Andras Kerekes <
andras.kere...@ishisystems.com> wrote:

> Hi Itamar,
>
>
>
> You need to change the dns name you’re querying for a bit:
>
>
>
> Use *dig _search3d._tcp.marathon.mesos*
>
>
>
> See: https://mesosphere.github.io/mesos-dns/docs/naming.html#srv-records
>
>
>
> Andras
>
>
>
> *From:* Itamar Ostricher [mailto:ita...@yowza3d.com]
> *Sent:* Tuesday, July 28, 2015 12:15 PM
> *To:* user@mesos.apache.org
> *Subject:* Can't get SRV records from Mesos-DNS
>
>
>
> Hi,
>
>
>
> I just set up mesos-dns with my mesos+marathon cluster, and it appears to
> be working fine, but I can't get SRV records.
>
>
>
> mesos-dns executed by running: $ sudo /usr/local/mesos-dns/mesos-dns
> -config /usr/local/mesos-dns/config.json
>
>
>
> verified to be working by running "dig" from another machine after
> updating its /etc/resolv.conf:
>
> $ dig +short search3d.marathon.mesos
>
> 10.240.76.32
>
> 10.240.175.55
>
>
>
> not sure though how to get SRV records for the service port numbers...
>
> $ nslookup -type=SRV search3d.marathon.mesos
>
> Server: 10.240.28.7
>
> Address:  10.240.28.7#53
>
>
>
> *** Can't find search3d.marathon.mesos: No answer
>
>
>
> Here's the config file:
>
> $ cat /usr/local/mesos-dns/config.json
>
> {
>
>   "zk": "zk://10.240.28.7:2181,10.240.168.92:2181,
> 10.240.251.236:2181/mesos",
>
>   "refreshSeconds": 60,
>
>   "ttl": 60,
>
>   "domain": "mesos",
>
>   "port": 53,
>
>   "resolvers": ["169.254.169.254", "10.240.0.1"],
>
>   "timeout": 5,
>
>   "email": "root.mesos-dns.mesos"
>
> }
>
>
>
> running latest version AFAIK:
>
> $ /usr/local/mesos-dns/mesos-dns -version
>
> 0.1.2
>
>
>
> Am I doing something wrong?
>
> Thanks!
>
> - Itamar.
>


Re: Custom executor

2015-07-28 Thread haosdent
Hi @Aaron, if you want to develop your own custom framework, you could check
out this document first:
https://github.com/apache/mesos/blob/master/docs/app-framework-development-guide.md
> I want to be able to write a custom executor which is available to
multiple schedulers (eg Marathon, Chronos and our own custom scheduler). Is
this possible?

If you want to write an executor used by Marathon/Chronos, you would need to
change their code. I think this is difficult and would not suggest it.

> Is it possible to mix and match languages for schedulers and executors?
(ie one is python one is C++)

Yes, you could use different languages for different components. You just
need to implement the interfaces and make sure the executor can run on the
slaves.


On Tue, Jul 28, 2015 at 8:08 PM, Aaron Carey  wrote:

>  Hi,
>
> Is it possible to build a custom executor which is not associated with a
> particular scheduler framework? I want to be able to write a custom
> executor which is available to multiple schedulers (eg Marathon, Chronos
> and our own custom scheduler). Is this possible? I couldn't quite figure
> out the best way to go about this from the docs? Is it possible to mix and
> match languages for schedulers and executors? (ie one is python one is C++)
>
> Thanks,
> Aaron
>



-- 
Best Regards,
Haosdent Huang


RE: Can't get SRV records from Mesos-DNS

2015-07-28 Thread Andras Kerekes
Hi Itamar,

 

You need to change the dns name you’re querying for a bit:

 

Use dig _search3d._tcp.marathon.mesos

 

See: https://mesosphere.github.io/mesos-dns/docs/naming.html#srv-records

 

Andras

 

From: Itamar Ostricher [mailto:ita...@yowza3d.com] 
Sent: Tuesday, July 28, 2015 12:15 PM
To: user@mesos.apache.org
Subject: Can't get SRV records from Mesos-DNS

 

Hi,

 

I just set up mesos-dns with my mesos+marathon cluster, and it appears to be 
working fine, but I can't get SRV records.

 

mesos-dns executed by running: $ sudo /usr/local/mesos-dns/mesos-dns -config 
/usr/local/mesos-dns/config.json

 

verified to be working by running "dig" from another machine after updating its 
/etc/resolv.conf:

$ dig +short search3d.marathon.mesos

10.240.76.32

10.240.175.55

 

not sure though how to get SRV records for the service port numbers...

$ nslookup -type=SRV search3d.marathon.mesos

Server: 10.240.28.7

Address:  10.240.28.7#53

 

*** Can't find search3d.marathon.mesos: No answer

 

Here's the config file:

$ cat /usr/local/mesos-dns/config.json

{

  "zk": "zk://10.240.28.7:2181,10.240.168.92:2181,10.240.251.236:2181/mesos",

  "refreshSeconds": 60,

  "ttl": 60,

  "domain": "mesos",

  "port": 53,

  "resolvers": ["169.254.169.254", "10.240.0.1"],

  "timeout": 5,

  "email": "root.mesos-dns.mesos"

}

 

running latest version AFAIK:

$ /usr/local/mesos-dns/mesos-dns -version

0.1.2

 

Am I doing something wrong?

Thanks!

- Itamar.
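The key point above is that the A record and the SRV record live under different names. A small helper sketching the SRV name Mesos-DNS expects (naming scheme per the Mesos-DNS docs linked above; `marathon` and `mesos` are the defaults used in this thread, so treat other values as placeholders):

```python
def srv_name(service, protocol="tcp", framework="marathon", domain="mesos"):
    """Mesos-DNS publishes SRV records under _<service>._<proto>.<framework>.<domain>."""
    return "_{0}._{1}.{2}.{3}".format(service, protocol, framework, domain)

name = srv_name("search3d")
print(name)  # _search3d._tcp.marathon.mesos
# Query it with: dig +short _search3d._tcp.marathon.mesos SRV
```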





Re: Can't get SRV records from Mesos-DNS

2015-07-28 Thread Michael Hausenblas

What does

dig _search3d._tcp.marathon.mesos SRV 

give you?

See also http://mesosphere.github.io/mesos-dns/docs/naming.html


Cheers,
Michael

--
Michael Hausenblas
Ireland, Europe
http://mhausenblas.info/

> On 28 Jul 2015, at 17:14, Itamar Ostricher  wrote:
> 
> Hi,
> 
> I just set up mesos-dns with my mesos+marathon cluster, and it appears to be 
> working fine, but I can't get SRV records.
> 
> mesos-dns executed by running: $ sudo /usr/local/mesos-dns/mesos-dns -config 
> /usr/local/mesos-dns/config.json
> 
> verified to be working by running "dig" from another machine after updating 
> its /etc/resolv.conf:
> $ dig +short search3d.marathon.mesos
> 10.240.76.32
> 10.240.175.55
> 
> not sure though how to get SRV records for the service port numbers...
> $ nslookup -type=SRV search3d.marathon.mesos
> Server:   10.240.28.7
> Address:  10.240.28.7#53
> 
> *** Can't find search3d.marathon.mesos: No answer
> 
> Here's the config file:
> $ cat /usr/local/mesos-dns/config.json
> {
>   "zk": "zk://10.240.28.7:2181,10.240.168.92:2181,10.240.251.236:2181/mesos",
>   "refreshSeconds": 60,
>   "ttl": 60,
>   "domain": "mesos",
>   "port": 53,
>   "resolvers": ["169.254.169.254", "10.240.0.1"],
>   "timeout": 5,
>   "email": "root.mesos-dns.mesos"
> }
> 
> running latest version AFAIK:
> $ /usr/local/mesos-dns/mesos-dns -version
> 0.1.2
> 
> Am I doing something wrong?
> Thanks!
> - Itamar.



Re: Custom executor

2015-07-28 Thread Sargun Dhillon
Yes. You can mix and match languages. In fact, a major Mesos framework does
this - Aurora. Its scheduler is written in Java and its executor is written
in Python. I've experimented myself with writing the scheduler in Golang and
the executor in Erlang.

In addition to this, making your executor accessible to multiple frameworks
is as simple as multiple frameworks referring to that executor in TaskInfo
at launch time.

Sent from my iPhone

On Jul 28, 2015, at 05:08, Aaron Carey  wrote:

 Hi,

Is it possible to build a custom executor which is not associated with a
particular scheduler framework? I want to be able to write a custom
executor which is available to multiple schedulers (eg Marathon, Chronos
and our own custom scheduler). Is this possible? I couldn't quite figure
out the best way to go about this from the docs? Is it possible to mix and
match languages for schedulers and executors? (ie one is python one is C++)

Thanks,
Aaron
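A rough sketch of Sargun's point, with plain Python dicts standing in for the protobuf messages (real code would build `mesos_pb2.ExecutorInfo`/`TaskInfo`; the field names below follow that scheme but are illustrative, as are the binary name and URI):

```python
def make_task(framework_name, task_id):
    """Build a task whose ExecutorInfo points at one shared executor binary."""
    shared_executor = {
        "executor_id": "shared-executor",
        # The executor binary is fetched into the sandbox and launched;
        # the URI below is a placeholder.
        "command": {
            "value": "./my-executor",
            "uris": [{"value": "hdfs:///executors/my-executor"}],
        },
    }
    return {
        "task_id": task_id,
        "framework": framework_name,
        # Any framework can reference the same executor at launch time.
        "executor": shared_executor,
    }

t1 = make_task("marathon", "web-1")
t2 = make_task("my-custom-scheduler", "batch-7")
print(t1["executor"]["executor_id"] == t2["executor"]["executor_id"])  # True
```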


Can't get SRV records from Mesos-DNS

2015-07-28 Thread Itamar Ostricher
Hi,

I just set up mesos-dns with my mesos+marathon cluster, and it appears to
be working fine, but I can't get SRV records.

mesos-dns executed by running: $ sudo /usr/local/mesos-dns/mesos-dns
-config /usr/local/mesos-dns/config.json

verified to be working by running "dig" from another machine after updating
its /etc/resolv.conf:
$ dig +short search3d.marathon.mesos
10.240.76.32
10.240.175.55

not sure though how to get SRV records for the service port numbers...
$ nslookup -type=SRV search3d.marathon.mesos
Server: 10.240.28.7
Address: 10.240.28.7#53

*** Can't find search3d.marathon.mesos: No answer

Here's the config file:
$ cat /usr/local/mesos-dns/config.json
{
  "zk": "zk://10.240.28.7:2181,10.240.168.92:2181,10.240.251.236:2181/mesos
",
  "refreshSeconds": 60,
  "ttl": 60,
  "domain": "mesos",
  "port": 53,
  "resolvers": ["169.254.169.254", "10.240.0.1"],
  "timeout": 5,
  "email": "root.mesos-dns.mesos"
}

running latest version AFAIK:
$ /usr/local/mesos-dns/mesos-dns -version
0.1.2

Am I doing something wrong?
Thanks!
- Itamar.


Re: Build 0.23 gcc Version

2015-07-28 Thread John Omernik
So, I don't mean to sound like a newbie here, but with my current setup,
which has 4.6.3 (and I also tried 4.8), how can I get Mesos 0.23 to compile?
Is this something I need to change in certain files or certain steps? Should
this be filed as a Mesos bug to handle older compiler versions? Is this a
configuration issue? I'd love to learn more about how this works, but would
appreciate some pointers here; since my setup is fairly vanilla, others may
also benefit from getting this to work.

John

On Mon, Jul 27, 2015 at 10:56 AM, James Peach  wrote:

>
> > On Jul 24, 2015, at 3:57 PM, Michael Park  wrote:
> >
> > Hi John,
> >
> > I would first suggest trying CC="gcc" CXX="g++" ../configure, and if
> that works, try to find out what which cc and which c++ return and find out
> what they symlink to.
> > I believe autotools uses cc and c++ rather than gcc and g++ by default,
> so I think there's probably something funky going on there.
>
> No, you explicitly tell autoconf to default to G++
>
> mesos.git jpeach$ grep AC_PROG_C configure.ac
> AC_PROG_CXX([g++])
> AC_PROG_CC([gcc])
>
> IMHO the correct invocation is something like:
> AC_PROG_CXX([c++ g++ clang++])
>
> since you should always default to the system default toolchain
>
> J
>
>


Custom executor

2015-07-28 Thread Aaron Carey
Hi,

Is it possible to build a custom executor which is not associated with a
particular scheduler framework? I want to be able to write a custom executor
which is available to multiple schedulers (e.g. Marathon, Chronos and our own
custom scheduler). Is this possible? I couldn't quite figure out the best way
to go about this from the docs. Is it possible to mix and match languages for
schedulers and executors? (i.e. one in Python, one in C++)

Thanks,
Aaron


Re: Questions about framework development - (HA and reconciling state)

2015-07-28 Thread Adam Bordelon
> 0. How do I go about the issue of HA at the scheduler level?
One alternative to having to do your own leader election is to use a
meta-framework like Marathon or Aurora to automatically restart your
scheduler. There will be a short downtime during the failover, but as soon
as the new scheduler comes back up it can recover state, reregister, and
reconcile. Then you only ever need one running instance, which is always
the leader.

> 1. How do I deal with restarts and reconciling the tasks?
I strongly recommend you read
http://mesos.apache.org/documentation/latest/reconciliation/

> 3. How does one go about testing frameworks? Any suggestions / pointers.
- Unit tests within your framework code, mocking necessary Mesos
Master/Slave components.
- Health checks on all your tasks, and a `/health` endpoint on your
scheduler, to ease integration testing.

On Sat, Jul 25, 2015 at 12:30 PM, Ankur Chauhan  wrote:

> Hi all,
>
>
> I am working on creating an integration between Apache Flink (
> http://flink.apache.org) and mesos which would be similar to the way the
> current hadoop-mesos integration works using the java mesos client.
> My current idea is that the scheduler will also run a JobManager process
> (similar to the jobTracker) which will start off a bunch of taskManager
> (similar to the TaskTracker) tasks using a custom executor.
>
> I want to get some feedback and information of the following questions I
> have:
>
> 0. How do I go about the issue of HA at the scheduler level?
> I was thinking of using zookeeper based leader election by directly
> maintaining a zookeeper connection myself. Is there a better way to do this
> (something which does not require me to use a self managed zookeeper
> connection)?
>
> 1. How do I deal with restarts and reconciling the tasks?
> In case the scheduler restarts (it currently maintains an in-memory map
> of currently running tasks), how do I go about rediscovering tasks and
> reconciling state?
> I was thinking of using DiscoveryInfo, but I can't find any reference that
> explains how to "query" Mesos for tasks matching the service discovery
> information. Any suggestions on how to do this?
>
> 3. How does one go about testing frameworks? Any suggestions / pointers.
>
> My work in progress version is at
> https://github.com/ankurcha/flink/tree/flink-mesos/flink-mesos
>
> Any help would be much appreciated.
>
>
> Thanks!
> Ankur
>
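A rough sketch of the reconciliation step Adam recommends, using a stand-in driver class so it runs without Mesos installed. A real scheduler would call `reconcileTasks` on the `MesosSchedulerDriver` with `TaskStatus` messages rather than bare ID strings; the task IDs below are placeholders.

```python
class FakeDriver:
    """Stand-in for MesosSchedulerDriver, recording reconciliation requests."""
    def __init__(self):
        self.requests = []

    def reconcile_tasks(self, statuses):
        # An empty list asks the master for *implicit* reconciliation:
        # it replays the latest status of every task it knows about.
        self.requests.append(list(statuses))

# State the scheduler recovered after a restart (e.g. from ZooKeeper or a DB).
recovered_task_ids = ["taskmanager-1", "taskmanager-2"]

driver = FakeDriver()
driver.reconcile_tasks([])                  # implicit: ask about everything
driver.reconcile_tasks(recovered_task_ids)  # explicit: ask about known tasks
print(driver.requests)
```

Status updates then arrive through the scheduler's normal `statusUpdate` callback, which is where the in-memory task map gets rebuilt.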


Re: Is it possible to run mesos master/slave in private IP and exhibit mesos cluster status with public IP?

2015-07-28 Thread craig w
Do the mesos-slaves have their own UI, separate from the master's? If so,
what's the URL to get to it? I just tried http://<slave-host>:5051 and got a
blank page.

On Tue, Jul 28, 2015 at 6:51 AM, Adam Bordelon  wrote:

> A simple nginx reverse proxy will get you most of the way there, but only
> for the master webui. Since the tasks' sandboxes are hosted on each slave's
> webui, you would also have to reverse proxy each slave's webui in order for
> sandboxes to be publicly accessible. More complicated, but not impossible.
>
> Sounds like what you really want is:
> https://issues.apache.org/jira/browse/MESOS-2102
>
> On Mon, Jul 27, 2015 at 5:55 AM, haosdent  wrote:
>
>> Could nginx satisfy your requirement? You send requests to nginx, and
>> nginx reverse-proxies them to the mesos master.
>>
>> On Mon, Jul 27, 2015 at 8:50 PM, sujz <43183...@qq.com> wrote:
>>
>>> Hi, all,
>>>
>>> I want to run the mesos master/slave/framework on a private IP and expose
>>> the mesos cluster status on a separate public IP; this would enhance the
>>> safety and privacy of our enterprise internal datacenter network.
>>>
>>> After reading through the code, unfortunately, I didn't find where
>>> mesos-master binds the listening IP and port for its HTTP server (please
>>> give me some hints). How can I communicate with slaves and frameworks over
>>> the private IP while serving external HTTP requests on the public IP?
>>>
>>> Any suggestions will be appreciated.
>>> Best regards!
>>
>>
>>
>>
>> --
>> Best Regards,
>> Haosdent Huang
>>
>
>


-- 

https://github.com/mindscratch
https://www.google.com/+CraigWickesser
https://twitter.com/mind_scratch
https://twitter.com/craig_links


Re: Is it possible to run mesos master/slave in private IP and exhibit mesos cluster status with public IP?

2015-07-28 Thread Adam Bordelon
A simple nginx reverse proxy will get you most of the way there, but only
for the master webui. Since the tasks' sandboxes are hosted on each slave's
webui, you would also have to reverse proxy each slave's webui in order for
sandboxes to be publicly accessible. More complicated, but not impossible.

Sounds like what you really want is:
https://issues.apache.org/jira/browse/MESOS-2102

On Mon, Jul 27, 2015 at 5:55 AM, haosdent  wrote:

> Could nginx satisfy your requirement? You send requests to nginx, and
> nginx reverse-proxies them to the mesos master.
>
> On Mon, Jul 27, 2015 at 8:50 PM, sujz <43183...@qq.com> wrote:
>
>> Hi, all,
>>
>> I want to run the mesos master/slave/framework on a private IP and expose
>> the mesos cluster status on a separate public IP; this would enhance the
>> safety and privacy of our enterprise internal datacenter network.
>>
>> After reading through the code, unfortunately, I didn't find where
>> mesos-master binds the listening IP and port for its HTTP server (please
>> give me some hints). How can I communicate with slaves and frameworks over
>> the private IP while serving external HTTP requests on the public IP?
>>
>> Any suggestions will be appreciated.
>> Best regards!
>
>
>
>
> --
> Best Regards,
> Haosdent Huang
>