Hello,

Using a Hadoop distribution is possible (here cdh4.1.2):
The hadoop-mesos framework requires an archive, so I created and
deployed a small dummy file that is cheap to fetch and untar.
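As a sketch of that step (the archive name and HDFS path here match the mapred.mesos.executor.uri value from my config; the archive contents are irrelevant, hadoop-mesos only needs something it can fetch and untar):

```shell
# Build a near-empty archive; hadoop-mesos only requires that the URI
# exists and untars cleanly, not that it contains a Hadoop distribution.
mkdir -p dummy
touch dummy/.placeholder
tar -czf dummy.tar.gz dummy

# Publish it where mapred.mesos.executor.uri points (needs a live cluster):
# hadoop fs -put -f dummy.tar.gz hdfs://hdfscluster/tmp/dummy.tar.gz
```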

In mapred-site.xml, I override mapred.mesos.executor.directory and
mapred.mesos.executor.command so that the Mesos task directory is used
for the job and the locally deployed Cloudera TaskTracker is executed:

+  <property>
+    <name>mapred.mesos.executor.uri</name>
+    <value>hdfs://hdfscluster/tmp/dummy.tar.gz</value>
+  </property>
+  <property>
+    <name>mapred.mesos.executor.directory</name>
+    <value>./</value>
+  </property>
+  <property>
+    <name>mapred.mesos.executor.command</name>
+    <value>. /etc/default/hadoop-0.20; env ; $HADOOP_HOME/bin/hadoop org.apache.hadoop.mapred.MesosExecutor</value>
+  </property>
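As a quick sanity check, the added properties can be wrapped in a <configuration> root and run through an XML parser before merging them into mapred-site.xml (a sketch; the python3 one-liner is just a stand-in for any XML parser):

```shell
# Write the three properties into a scratch file wrapped in <configuration>,
# then parse it to catch typos before merging into mapred-site.xml.
cat > mesos-props.xml <<'EOF'
<configuration>
  <property>
    <name>mapred.mesos.executor.uri</name>
    <value>hdfs://hdfscluster/tmp/dummy.tar.gz</value>
  </property>
  <property>
    <name>mapred.mesos.executor.directory</name>
    <value>./</value>
  </property>
  <property>
    <name>mapred.mesos.executor.command</name>
    <value>. /etc/default/hadoop-0.20; env ; $HADOOP_HOME/bin/hadoop org.apache.hadoop.mapred.MesosExecutor</value>
  </property>
</configuration>
EOF
python3 -c 'import xml.dom.minidom; xml.dom.minidom.parse("mesos-props.xml")' && echo "XML OK"
```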

Add some environment variables in /etc/default/hadoop-0.20 so Hadoop
services can find the hadoop-mesos jar and libmesos:

+export HADOOP_CLASSPATH=/usr/lib/hadoop-mesos/hadoop-mesos.jar:$HADOOP_HOME/contrib/fairscheduler/hadoop-fairscheduler-2.0.0-mr1-cdh4.1.2.jar:$HADOOP_CLASSPATH
+export MESOS_NATIVE_LIBRARY=/usr/lib/libmesos.so
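To check that the two exports combine as intended, the file can be sourced and the variables inspected (a sketch against a stand-in copy of /etc/default/hadoop-0.20, since the real file and paths differ per host):

```shell
# Stand-in for /etc/default/hadoop-0.20 containing the two added lines.
cat > hadoop-defaults.sh <<'EOF'
export HADOOP_CLASSPATH=/usr/lib/hadoop-mesos/hadoop-mesos.jar:$HADOOP_HOME/contrib/fairscheduler/hadoop-fairscheduler-2.0.0-mr1-cdh4.1.2.jar:$HADOOP_CLASSPATH
export MESOS_NATIVE_LIBRARY=/usr/lib/libmesos.so
EOF

. ./hadoop-defaults.sh
# The hadoop-mesos jar must appear on the classpath and the native
# library path must be set for the executor JVM to load libmesos.
echo "$HADOOP_CLASSPATH" | grep -q hadoop-mesos.jar && echo "classpath OK"
[ -n "$MESOS_NATIVE_LIBRARY" ] && echo "libmesos path set"
```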

I created a hadoop-mesos deb to be deployed alongside the Hadoop
distribution. My goal is to avoid a -copyToLocal of the TaskTracker code
for each Mesos task, with no special manipulation of the Hadoop
distribution itself (only configuration).

Regards,

On 31/12/2013 16:45, Damien Hardy wrote:
> I'm now able to use snappy compression by adding
> 
> export JAVA_LIBRARY_PATH=/usr/lib/hadoop/lib/native/
> in my /etc/default/mesos-slave (environment variable for mesos-slave
> process used by my init.d script)
> 
> This environment variable is propagated to the executor JVM, so the
> TaskTracker can find and use libsnappy.so.
> 
> Now starting on a local deployment of cdh4 ...
> 
> Reading the source, it seems that something could be done using
> mapred.mesos.executor.directory and mapred.mesos.executor.command
> to use a local Hadoop.
> 
> 
> On 31/12/2013 15:08, Damien Hardy wrote:
>> Hello,
>>
>> Happy new year 2014 @mesos users.
>>
>> I am trying to get MapReduce cdh4.1.2 running on Mesos.
>>
>> It seems to work mostly, but a few things are still problematic.
>>
>>   * MR1 code is already deployed locally alongside HDFS; is there a way to
>> use it instead of a tar.gz stored on HDFS that must be copied locally and
>> untarred?
>>
>>   * If not: the tar.gz distribution of cdh4 seems not to support
>> Snappy compression. Is there a way to correct this?
>>
>> Best regards,
>>
> 

-- 
Damien HARDY
IT Infrastructure Architect
Viadeo - 30 rue de la Victoire - 75009 Paris - France
PGP : 45D7F89A
