Re: Multiple disks with Mesos

2014-10-08 Thread Damien Hardy
Hello,

I run Mesos on top of Hadoop HDFS.
Hadoop handles JBOD configurations well.

Today Mesos can only work on one of the disks and cannot take advantage
of the others (i.e. use the non-HDFS space).

It would be a great feature for Mesos to handle JBOD too; it deals with
failure better than LVM, for example.

Cheers,

Le 08/10/2014 01:06, Arunabha Ghosh a écrit :
 Hi,
  I would like to run Mesos slaves on machines that have multiple
 disks. According to the Mesos configuration page
 http://mesos.apache.org/documentation/latest/configuration/ I can
 specify a work_dir argument to the slaves. 
 
 1) Can the work_dir argument contain multiple directories?
 
 2) Is the work_dir where Mesos will place all of its data? So if I
 started a task on Mesos, would the slave place the task's data (stderr,
 stdout, task-created directories) inside work_dir?
 
 Thanks,
 Arunabha

-- 
Damien HARDY



signature.asc
Description: OpenPGP digital signature


How can libmesos bind and declare specific network interface

2014-07-01 Thread Damien Hardy
Hello,

We would like to use Spark on Mesos, but the Mesos cluster is accessible via VPN.
When running spark-shell we can see registration attempts going out on the
desktop's default public interface:

```
I0701 12:07:34.710917  2440 master.cpp:820] Framework
20140612-135938-16790026-5050-2407-0537
(scheduler(1)@192.168.2.92:42731) already registered, resending
acknowledgement
I0701 12:07:35.711632  2430 master.cpp:815] Received registration
request from scheduler(1)@192.168.2.92:42731
```

But we would like it to register with the VPN interface.

This works when I change my /etc/hosts file and set the hostname to
my VPN address:
```
I0701 12:03:54.193022  2441 master.cpp:815] Received registration
request from scheduler(1)@10.69.69.45:47440
I0701 12:03:54.193094  2441 master.cpp:833] Registering framework
20140612-135938-16790026-5050-2407-0536 at scheduler(1)@10.69.69.45:47440
```

I tried spark with
```
spark.driver.host   10.69.69.45
```
I can see Spark binding to the right interface, but Mesos keeps
registering with the default one (and fails).

I hoped the environment variable $MESOS_hostname would do the trick, but
without success...

Thanks for the help.

-- 
Damien HARDY
IT Infrastructure Architect
Viadeo - 30 rue de la Victoire - 75009 Paris - France
PGP : 45D7F89A





Re: How can libmesos bind and declare specific network interface

2014-07-01 Thread Damien Hardy
Hi,
I am not talking about mesos-master or mesos-slave, but about the Spark
driver (which uses libmesos as a framework): during Mesos registration it
declares itself as coming from the desktop's default interface instead of
the VPN one.

So mesos-master tries to reach an interface it cannot access.

Spark is using spark.driver.host   10.69.69.45
netstat shows:
tcp    0  0 0.0.0.0:44424      0.0.0.0:*       LISTEN      1000  3076384 6779/java
tcp    0  0 10.69.69.45:39698  10.50.0.1:5050  ESTABLISHED 1000  3068664 6779/java
tcp6   0  0 :::43430           :::*            LISTEN      1000  3077940 6779/java
tcp6   0  0 :::37926           :::*            LISTEN      1000  3077939 6779/java
tcp6   0  0 :::4040            :::*            LISTEN      1000  3077942 6779/java
tcp6   0  0 :::51154           :::*            LISTEN      1000  3077938 6779/java
tcp6   0  0 10.69.69.45:34610  :::*            LISTEN      1000  3076383 6779/java
tcp6   0  0 :::43122           :::*            LISTEN      1000  3077884 6779/java

We can see the Spark sockets are bound to 10.69.69.45, but the problem
remains for port 44424, which is the one mesos-master is supposed to reach
during registration.

I would like to make libmesos, as used by the framework, bind to the right
interface.

Le 01/07/2014 13:34, Tomas Barton a écrit :
 Hi, 
 
 have you tried setting '--ip 10.69.69.45' ?
 
 So, is mesos-master bound to the wrong interface? Or do you have a problem
 with the mesos-slaves?
 
 Tomas
 
 
 On 1 July 2014 12:16, Damien Hardy dha...@viadeoteam.com wrote:
 
 Hello,
 
 We would like to use Spark on Mesos, but the Mesos cluster is accessible
 via VPN.
 When running spark-shell we can see registration attempts going out on
 the desktop's default public interface:
 
 ```
 I0701 12:07:34.710917  2440 master.cpp:820] Framework
 20140612-135938-16790026-5050-2407-0537
 (scheduler(1)@192.168.2.92:42731) already registered, resending
 acknowledgement
 I0701 12:07:35.711632  2430 master.cpp:815] Received registration
 request from scheduler(1)@192.168.2.92:42731
 ```
 
 But we would like it to register with the VPN interface.
 
 This works when I change my /etc/hosts file and set the hostname to
 my VPN address:
 ```
 I0701 12:03:54.193022  2441 master.cpp:815] Received registration
 request from scheduler(1)@10.69.69.45:47440
 I0701 12:03:54.193094  2441 master.cpp:833] Registering framework
 20140612-135938-16790026-5050-2407-0536 at
 scheduler(1)@10.69.69.45:47440
 ```
 
 I tried spark with
 ```
 spark.driver.host   10.69.69.45
 ```
 I can see Spark binding to the right interface, but Mesos keeps
 registering with the default one (and fails).
 
 I hoped the environment variable $MESOS_hostname would do the trick, but
 without success...
 
 Thanks for the help.
 
 --
 Damien HARDY
 IT Infrastructure Architect
 Viadeo - 30 rue de la Victoire - 75009 Paris - France
 PGP : 45D7F89A
 
 

-- 
Damien HARDY
IT Infrastructure Architect
Viadeo - 30 rue de la Victoire - 75009 Paris - France
PGP : 45D7F89A





Re: How can libmesos bind and declare specific network interface

2014-07-01 Thread Damien Hardy
Good one \o/

many thanks.
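For readers of the archive, the suggestion quoted below is what worked here. A minimal sketch as a shell session (the VPN address 10.69.69.45 comes from this thread; the framework launch itself is commented out since it needs a real cluster):

```shell
# libprocess (the transport used by libmesos inside the Spark driver)
# honours LIBPROCESS_IP; set it to the VPN interface before launching
# the framework, so registration advertises the reachable address.
export LIBPROCESS_IP=10.69.69.45
# MASTER=mesos://10.50.0.1:5050 ./bin/spark-shell   # launch from the same shell
echo "$LIBPROCESS_IP"   # prints 10.69.69.45
```

The key point is that the variable must be set in the environment of the process that embeds libmesos (here the Spark driver), not on the Mesos daemons.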

Le 01/07/2014 14:52, Tomas Barton a écrit :
 what about:
 
 export LIBPROCESS_IP=10.69.69.45
 
 or add an iptables rule translating the port range 3:6 to that
 interface?
 
 
 On 1 July 2014 14:30, Damien Hardy dha...@viadeoteam.com wrote:
 
 Hi,
 I am not talking about mesos-master or mesos-slave, but about the Spark
 driver (which uses libmesos as a framework): during Mesos registration it
 declares itself as coming from the desktop's default interface instead of
 the VPN one.
 
 So mesos-master tries to reach an interface it cannot access.
 
 Spark is using spark.driver.host   10.69.69.45
 netstat shows:
 tcp    0  0 0.0.0.0:44424      0.0.0.0:*       LISTEN      1000  3076384 6779/java
 tcp    0  0 10.69.69.45:39698  10.50.0.1:5050  ESTABLISHED 1000  3068664 6779/java
 tcp6   0  0 :::43430           :::*            LISTEN      1000  3077940 6779/java
 tcp6   0  0 :::37926           :::*            LISTEN      1000  3077939 6779/java
 tcp6   0  0 :::4040            :::*            LISTEN      1000  3077942 6779/java
 tcp6   0  0 :::51154           :::*            LISTEN      1000  3077938 6779/java
 tcp6   0  0 10.69.69.45:34610  :::*            LISTEN      1000  3076383 6779/java
 tcp6   0  0 :::43122           :::*            LISTEN      1000  3077884 6779/java
 
 We can see the Spark sockets are bound to 10.69.69.45, but the problem
 remains for port 44424, which is the one mesos-master is supposed to reach
 during registration.
 
 I would like to make libmesos, as used by the framework, bind to the right
 interface.
 
 Le 01/07/2014 13:34, Tomas Barton a écrit :
  Hi,
 
  have you tried setting '--ip 10.69.69.45' ?
 
  So, is mesos-master bound to the wrong interface? Or do you have a problem
  with the mesos-slaves?
 
  Tomas
 
 
  On 1 July 2014 12:16, Damien Hardy dha...@viadeoteam.com wrote:
 
  Hello,
 
  We would like to use Spark on Mesos, but the Mesos cluster is
  accessible via VPN.
  When running spark-shell we can see registration attempts going out
  on the desktop's default public interface:
 
  ```
  I0701 12:07:34.710917  2440 master.cpp:820] Framework
  20140612-135938-16790026-5050-2407-0537
  (scheduler(1)@192.168.2.92:42731) already registered, resending
  acknowledgement
  I0701 12:07:35.711632  2430 master.cpp:815] Received registration
  request from scheduler(1)@192.168.2.92:42731
  ```
 
  But we would like it to register with the VPN interface.
 
  This works when I change my /etc/hosts file and set the hostname to
  my VPN address:
  ```
  I0701 12:03:54.193022  2441 master.cpp:815] Received registration
  request from scheduler(1)@10.69.69.45:47440
  I0701 12:03:54.193094  2441 master.cpp:833] Registering framework
  20140612-135938-16790026-5050-2407-0536 at
  scheduler(1)@10.69.69.45:47440
  ```
 
  I tried spark with
  ```
  spark.driver.host   10.69.69.45
  ```
  I can see Spark binding to the right interface, but Mesos keeps
  registering with the default one (and fails).
 
  I hoped the environment variable $MESOS_hostname would do the trick, but
  without success...
 
  Thanks for the help.
 
  --
  Damien HARDY
  IT Infrastructure Architect
  Viadeo - 30 rue de la Victoire - 75009 Paris - France
  PGP : 45D7F89A
 
 
 
 --
 Damien HARDY
 IT Infrastructure Architect
 Viadeo - 30 rue de la Victoire - 75009 Paris - France
 PGP : 45D7F89A
 
 

-- 
Damien HARDY
IT Infrastructure Architect
Viadeo - 30 rue de la Victoire - 75009 Paris - France
PGP : 45D7F89A





Re: Log management

2014-05-30 Thread Damien Hardy
Hello,

Yes, I do.
I thought that was the right thing to do for logs, but a never-ending file
is not safely usable; the --log_dir option needs some rework, I suppose.
I will pipe stdout/stderr instead (using logrotate's copytruncate option to
handle the open file descriptors).
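A minimal sketch of such a copytruncate rule (the log file paths are assumptions, not from this thread; the rule is written to /tmp here purely for illustration):

```shell
# copytruncate copies the live log file and then truncates it in place,
# so the writing process can keep its stdout/stderr file descriptors
# open across rotations (no restart needed).
cat > /tmp/mesos-stdout.logrotate <<'EOF'
/var/log/mesos/mesos-slave.out /var/log/mesos/mesos-slave.err {
    daily
    rotate 7
    compress
    missingok
    copytruncate
}
EOF
grep -c copytruncate /tmp/mesos-stdout.logrotate   # prints 1
```

The trade-off with copytruncate is a small window in which lines written between the copy and the truncate can be lost, which is usually acceptable for service logs.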

Thank you

Le 15/05/2014 22:02, Tomas Barton a écrit :
 Hi Damien,
 
 do you use the `--log_dir` switch? If so, Mesos creates quite a lot of
 files in a strange format:
 
 mesos-slave.{hostname}.invalid-user.log.INFO.20140409-155625.7545
 
 when you forward the stdout of the service to a single file and afterwards
 apply simple logrotate rules, you might get nicer logs.
 
 Tomas

-- 
Damien HARDY





Re: Log management

2014-05-30 Thread Damien Hardy
Hello,
Your point is valid; however, the --log_dir option provides a way to
display logs for users in the HTTP UI, which tailing stdout/stderr does
not permit.

Best regards,

Le 15/05/2014 17:42, Dick Davies a écrit :
 I'd try a newer version before you file bugs - but to be honest, log rotation
 is logrotate's job; it's really not very hard to set up.
 
 In our stack we run under upstart, so things make it into syslog and we
 don't have to worry about rotation - it scales better too, as it's easier
 to centralize.
 
 On 14 May 2014 09:46, Damien Hardy dha...@viadeoteam.com wrote:
 Hello,

 Logging in Mesos is problematic for me so far.
 We are used to the log4j facility in the Java world, which permits a lot
 of things.

 Mainly I would like log rotation (ideally with the logrotate tool, to be
 homogeneous with other things) without restarting processes, because in
 my experience it loses history (Mesos 0.16.0 so far).

 Best regards,

 --
 Damien HARDY

-- 
Damien HARDY





Re: Log management

2014-05-16 Thread Damien Hardy
Hello,

I created https://issues.apache.org/jira/browse/MESOS-1375

Thank you,

Cheers,

Le 14/05/2014 19:28, Adam Bordelon a écrit :
 Hi Damien,
 
 Log rotation sounds like a reasonable request. Please file a JIRA for
 it, and we can discuss details there.
 
 Thanks,
 -Adam-

-- 
Damien HARDY





Re: run master and slave on same host (just for testing)?

2014-01-15 Thread Damien Hardy
Hello Jim,

Actually, the configuration options are mostly different between master and
slave. Only a few of them are common, and those mostly address network or
logging concerns that would be shared on a single host anyway.

BTW, you can use environment variables for settings:
any option can be set using the $MESOS_ prefix.

e.g.:
export MESOS_ip=192.168.255.2
./bin/mesos-slave
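Sketching that pattern with a second flag (log_dir is another flag discussed on this list; the mesos-slave launch itself is commented out since it needs a Mesos build):

```shell
# Any command-line flag can instead be supplied as MESOS_<flag name>
# in the environment of the daemon:
export MESOS_ip=192.168.255.2
export MESOS_log_dir=/var/log/mesos
# ./bin/mesos-slave         # picks both settings up from the environment
echo "$MESOS_ip $MESOS_log_dir"   # prints 192.168.255.2 /var/log/mesos
```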


For my own Debian packaging needs, I wrote init scripts that may inspire
you for running the services.
They use /etc/default/mesos, /etc/default/mesos-master and
/etc/default/mesos-slave respectively:

https://github.com/viadeo/mesos/blob/feature/Debian/debian/mesos-master.init
https://github.com/viadeo/mesos/blob/feature/Debian/debian/mesos-slave.init

Cheers,

Le 15/01/2014 02:18, Jim Freeman a écrit :
 I'm following the Mesos README's Running a Mesos Cluster section.  Is
 [prefix]/var/mesos/conf/mesos.conf used to configure both the master and
 the slave?  If so, then I can't run master and slave on the same host,
 since the config would differ for master vs. slave.  BTW, I don't see
 this file installed, nor do I see a .template for it anywhere.
 
 - Jim

-- 
Damien HARDY





Re: Hadoop on Mesos use local cdh4 installation instead of tar.gz

2014-01-02 Thread Damien Hardy
Hello,

Using the installed Hadoop distribution is possible (here CDH 4.1.2):
an archive is mandatory for the hadoop-mesos framework, so I created and
deployed a small dummy file that costs very little to fetch and untar.

In mapred-site.xml, I override mapred.mesos.executor.directory and
mapred.mesos.executor.command so the job uses the Mesos task directory
and executes the Cloudera-deployed TaskTracker.

  <property>
    <name>mapred.mesos.executor.uri</name>
    <value>hdfs://hdfscluster/tmp/dummy.tar.gz</value>
  </property>
  <property>
    <name>mapred.mesos.executor.directory</name>
    <value>./</value>
  </property>
  <property>
    <name>mapred.mesos.executor.command</name>
    <value>. /etc/default/hadoop-0.20; env ; $HADOOP_HOME/bin/hadoop org.apache.hadoop.mapred.MesosExecutor</value>
  </property>

I add some environment variables in /etc/default/hadoop-0.20 so the Hadoop
services can find the hadoop-mesos jar and libmesos:

export HADOOP_CLASSPATH=/usr/lib/hadoop-mesos/hadoop-mesos.jar:$HADOOP_HOME/contrib/fairscheduler/hadoop-fairscheduler-2.0.0-mr1-cdh4.1.2.jar:$HADOOP_CLASSPATH
export MESOS_NATIVE_LIBRARY=/usr/lib/libmesos.so

I created a hadoop-mesos deb to be deployed alongside the Hadoop
distribution. My goal is to avoid a -copyToLocal of the TaskTracker code
for every Mesos task, with no special manipulation of the Hadoop
distribution itself (only configuration).
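The dummy archive itself can be produced along these lines (a sketch; the HDFS destination is the one referenced by mapred.mesos.executor.uri in this thread, and the hadoop upload is commented out since it needs a live cluster):

```shell
# Build a tiny placeholder archive: hadoop-mesos insists on fetching and
# untarring *some* executor URI, but the real TaskTracker code is already
# installed locally from packages.
mkdir -p /tmp/dummy
echo placeholder > /tmp/dummy/README
tar czf /tmp/dummy.tar.gz -C /tmp dummy
tar tzf /tmp/dummy.tar.gz          # lists dummy/ and dummy/README
# Publish it where the executor URI points:
# hadoop fs -put /tmp/dummy.tar.gz hdfs://hdfscluster/tmp/dummy.tar.gz
```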

Regards,

Le 31/12/2013 16:45, Damien Hardy a écrit :
 I'm now able to use Snappy compression by adding
 
 export JAVA_LIBRARY_PATH=/usr/lib/hadoop/lib/native/
 in my /etc/default/mesos-slave (an environment variable for the
 mesos-slave process, used by my init.d script).
 
 This variable is propagated to the executor JVM, so the TaskTracker can
 find libsnappy.so and use it.
 
 Starting to use the local deployment of CDH4 ...
 
 Reading the source, it seems something could be done using
 mapred.mesos.executor.directory and mapred.mesos.executor.command
 to use the local Hadoop.
 
 
 Le 31/12/2013 15:08, Damien Hardy a écrit :
 Hello,

 Happy new year 2014 @mesos users.

 I am trying to get MapReduce CDH 4.1.2 running on Mesos.

 It mostly seems to work, but a few things are still problematic:

   * The MR1 code is already deployed locally with HDFS; is there a way to
 use it instead of a tar.gz stored on HDFS that is copied locally and
 untarred?

   * If not: the tar.gz distribution of CDH4 seems not to support Snappy
 compression. Is there a way to correct that?

 Best regards,

 

-- 
Damien HARDY
IT Infrastructure Architect
Viadeo - 30 rue de la Victoire - 75009 Paris - France
PGP : 45D7F89A





Hadoop on Mesos use local cdh4 installation instead of tar.gz

2013-12-31 Thread Damien Hardy
Hello,

Happy new year 2014 @mesos users.

I am trying to get MapReduce CDH 4.1.2 running on Mesos.

It mostly seems to work, but a few things are still problematic:

  * The MR1 code is already deployed locally with HDFS; is there a way to
use it instead of a tar.gz stored on HDFS that is copied locally and
untarred?

  * If not: the tar.gz distribution of CDH4 seems not to support Snappy
compression. Is there a way to correct that?

Best regards,

-- 
Damien HARDY





Re: Hadoop on Mesos use local cdh4 installation instead of tar.gz

2013-12-31 Thread Damien Hardy
I'm now able to use Snappy compression by adding

export JAVA_LIBRARY_PATH=/usr/lib/hadoop/lib/native/
in my /etc/default/mesos-slave (an environment variable for the mesos-slave
process, used by my init.d script).

This variable is propagated to the executor JVM, so the TaskTracker can
find libsnappy.so and use it.

Starting to use the local deployment of CDH4 ...

Reading the source, it seems something could be done using
mapred.mesos.executor.directory and mapred.mesos.executor.command
to use the local Hadoop.


Le 31/12/2013 15:08, Damien Hardy a écrit :
 Hello,
 
 Happy new year 2014 @mesos users.
 
 I am trying to get MapReduce CDH 4.1.2 running on Mesos.
 
 It mostly seems to work, but a few things are still problematic:
 
   * The MR1 code is already deployed locally with HDFS; is there a way to
 use it instead of a tar.gz stored on HDFS that is copied locally and
 untarred?
 
   * If not: the tar.gz distribution of CDH4 seems not to support Snappy
 compression. Is there a way to correct that?
 
 Best regards,
 

-- 
Damien HARDY





Re: is mesos-submit broken on HEAD (0.15) ?

2013-09-23 Thread Damien Hardy
Thank you Benjamin,

I am getting 502 errors on https://reviews.apache.org for now /o\


2013/9/20 Benjamin Mahler benjamin.mah...@gmail.com

 mesos-submit is indeed broken and in need of some love, David Greenberg
 has a review to fix it:
 https://reviews.apache.org/r/13367/


 On Fri, Sep 20, 2013 at 8:06 AM, Damien Hardy dha...@viadeoteam.comwrote:

 Hello,

 mesos-submit seems broken (or maybe I missed something).

 I want to execute some helloworld on my deployed mesos cluster.

 ```
 vagrant@master01:~/mesos$ ./frameworks/mesos-submit/mesos_submit.py zk://
 192.168.255.2:2181/mesos 'echo plop'
 Connecting to mesos master zk://192.168.255.2:2181/mesos
 Traceback (most recent call last):
   File "./frameworks/mesos-submit/mesos_submit.py", line 102, in <module>
     mesos.MesosSchedulerDriver(sched, master).run()
 TypeError: function takes exactly 3 arguments (2 given)
 ```

 The test frameworks assume that the whole build directory is deployed on
 every node (in the same place).
 Running one complains that the test-executor file is not found, because I
 deploy nodes using the Debian package of the slave service and its
 dependencies (without the test files).

 --
 Damien





-- 
Damien HARDY
IT Infrastructure Architect
Viadeo - 30 rue de la Victoire - 75009 Paris - France