You will probably need to use the DistributedCache to ship your jar to all
the nodes as well; read the DistributedCache documentation. Then on each
node you can add the new jar to java.library.path through
mapred.child.java.opts.

You need to do something like the following in mapred-site.xml, where
fs-uri is the URI of the file system (something like
host.mycompany.com:54310).

<property>
  <name>mapred.cache.files</name>
  <value>hdfs://fs-uri/jcuda/jcuda.jar#jcuda.jar</value>
</property>
<property>
  <name>mapred.create.symlink</name>
  <value>yes</value>
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Djava.library.path=jcuda.jar</value>
</property>
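If you prefer to set this per job rather than cluster-wide in
mapred-site.xml, the same three properties can be set from the job driver.
This is a minimal sketch using the old (0.20.x) DistributedCache API; the
class name JCudaJobSetup and the fs-uri placeholder are illustrative, not
from the original mail.

```java
import java.net.URI;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.mapred.JobConf;

public class JCudaJobSetup {
    public static JobConf configure() throws Exception {
        JobConf conf = new JobConf(JCudaJobSetup.class);
        // Ship the jar via the distributed cache; the "#jcuda.jar"
        // fragment names the symlink created in each task's working dir.
        DistributedCache.addCacheFile(
            new URI("hdfs://fs-uri/jcuda/jcuda.jar#jcuda.jar"), conf);
        // Equivalent of mapred.create.symlink=yes.
        DistributedCache.createSymlink(conf);
        // Same value as in the XML snippet above.
        conf.set("mapred.child.java.opts",
                 "-Djava.library.path=jcuda.jar");
        return conf;
    }
}
```

The driver then submits the job with this JobConf as usual (e.g. via
JobClient.runJob).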


-----Original Message-----
From: Adarsh Sharma [mailto:[email protected]] 
Sent: 28 February 2011 16:03
To: [email protected]
Subject: Setting java.library.path for map-reduce job

Dear all,

I want to add some extra jars to java.library.path, which is used while
running a map-reduce program on a Hadoop cluster.

I get an exception, "no jcuda in java.library.path", in each map
task.

I run my map-reduce code with the commands below:

javac -classpath \
/home/hadoop/project/hadoop-0.20.2/hadoop-0.20.2-core.jar:/home/hadoop/project/hadoop-0.20.2/jcuda_1.1_linux64/jcuda.jar:/home/hadoop/project/hadoop-0.20.2/lib/commons-cli-1.2.jar \
-d wordcount_classes1/ WordCount.java

jar -cvf wordcount1.jar -C wordcount_classes1/ .

bin/hadoop jar wordcount1.jar org.myorg.WordCount /user/hadoop/gutenberg
/user/hadoop/output1


Please guide how to achieve this.



Thanks & best Regards,

Adarsh Sharma
