Thanks Sanjay, it seems I found the root cause, but now I get the following error:
[hadoop@ws37-mah-lin hadoop-0.20.2]$ bin/hadoop jar wordcount1.jar org.myorg.WordCount /user/hadoop/gutenberg /user/hadoop/output1
Exception in specified URI's java.net.URISyntaxException: Illegal
character in path at index 36: hdfs://192.168.0.131:54310/jcuda.jar
at java.net.URI$Parser.fail(URI.java:2809)
at java.net.URI$Parser.checkChars(URI.java:2982)
at java.net.URI$Parser.parseHierarchical(URI.java:3066)
at java.net.URI$Parser.parse(URI.java:3014)
at java.net.URI.<init>(URI.java:578)
at org.apache.hadoop.util.StringUtils.stringToURI(StringUtils.java:204)
at org.apache.hadoop.filecache.DistributedCache.getCacheFiles(DistributedCache.java:593)
at org.apache.hadoop.mapred.JobClient.configureCommandLineOptions(JobClient.java:638)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:761)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
at org.myorg.WordCount.main(WordCount.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Exception in thread "main" java.lang.NullPointerException
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:176)
at org.apache.hadoop.filecache.DistributedCache.getTimestamp(DistributedCache.java:506)
at org.apache.hadoop.mapred.JobClient.configureCommandLineOptions(JobClient.java:640)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:761)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
at org.myorg.WordCount.main(WordCount.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
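One thing worth noting: the string hdfs://192.168.0.131:54310/jcuda.jar is exactly 36 characters long (indices 0 to 35), so a character "at index 36" can only be something appended after the path, such as an invisible trailing space in the <value> element. A minimal standalone check (my own sketch, not part of the job) reproduces the parser's complaint:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class UriSpaceCheck {
    public static void main(String[] args) {
        // "hdfs://192.168.0.131:54310/jcuda.jar" has 36 characters (indices 0..35),
        // so a stray trailing space sits at index 36 and is illegal in a URI path.
        String value = "hdfs://192.168.0.131:54310/jcuda.jar ";
        try {
            new URI(value);
        } catch (URISyntaxException e) {
            // Prints: Illegal character in path at index 36: hdfs://192.168.0.131:54310/jcuda.jar
            System.out.println(e.getMessage());
        }
    }
}
```

If this is the cause, trimming the whitespace inside the mapred.cache.files value should make the URISyntaxException (and the follow-on NullPointerException from the unparsed cache URI) go away.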
Please check my attached mapred-site.xml.
Thanks & best regards,
Adarsh Sharma
Kaluskar, Sanjay wrote:
You will probably have to use the distributed cache to distribute your
jar to all the nodes too. Read the DistributedCache documentation; then,
on each node, you can add the new jar to java.library.path through
mapred.child.java.opts.
You need to do something like the following in mapred-site.xml, where
fs-uri is the URI of the file system (something like
host.mycompany.com:54310):
<property>
<name>mapred.cache.files</name>
<value>hdfs://fs-uri/jcuda/jcuda.jar#jcuda.jar</value>
</property>
<property>
<name>mapred.create.symlink</name>
<value>yes</value>
</property>
<property>
<name>mapred.child.java.opts</name>
<value>-Djava.library.path=jcuda.jar</value>
</property>
-----Original Message-----
From: Adarsh Sharma [mailto:[email protected]]
Sent: 28 February 2011 16:03
To: [email protected]
Subject: Setting java.library.path for map-reduce job
Dear all,
I want to add some extra jars to java.library.path, used while running a
map-reduce program on a Hadoop cluster.
I get an exception, "no jcuda in java.library.path", in each map task.
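As a side note: java.library.path is the JVM's search path for native libraries (for jcuda, that would be libjcuda.so), not for jar files, which belong on the classpath instead. A tiny probe class (the name is illustrative) can be run inside a task to show which directories the JVM actually searches:

```java
public class LibPathProbe {
    public static void main(String[] args) {
        // System.loadLibrary("jcuda") searches these directories for
        // libjcuda.so; the error "no jcuda in java.library.path" means
        // none of them contains the native library.
        System.out.println(System.getProperty("java.library.path"));
    }
}
```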
I run my map-reduce code with the commands below:
javac -classpath /home/hadoop/project/hadoop-0.20.2/hadoop-0.20.2-core.jar:/home/hadoop/project/hadoop-0.20.2/jcuda_1.1_linux64/jcuda.jar:/home/hadoop/project/hadoop-0.20.2/lib/commons-cli-1.2.jar -d wordcount_classes1/ WordCount.java
jar -cvf wordcount1.jar -C wordcount_classes1/ .
bin/hadoop jar wordcount1.jar org.myorg.WordCount /user/hadoop/gutenberg /user/hadoop/output1
Please guide me on how to achieve this.
Thanks & best Regards,
Adarsh Sharma
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>192.168.0.131:54311</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
<property>
<name>mapred.local.dir</name>
<value>/hdd-1/mapred/local</value>
<description>The local directory where MapReduce stores intermediate
data files. May be a comma-separated list of directories on different devices in order to spread disk i/o.
Directories that do not exist are ignored.
</description>
</property>
<property>
<name>mapred.system.dir</name>
<value>/home/mapred/system</value>
<description>The shared directory where MapReduce stores control files.
</description>
</property>
<property>
<name>mapred.cache.files</name>
<value>hdfs://192.168.0.131:54310/jcuda.jar </value>
</property>
<property>
<name>mapred.create.symlink</name>
<value>yes</value>
</property>
<property>
<name>mapred.child.java.opts</name>
<value>-Djava.library.path=/home/hadoop/project/hadoop-0.20.2/0.20.2/jcuda_1.1_linux64/jcuda.jar</value>
</property>
<property>
<name>mapreduce.map.java.opts</name>
<value>
-Xmx256M -Djava.library.path=/home/hadoop/project/hadoop-0.20.2/0.20.2/jcuda_1.1_linux64 </value>
</property>
<property>
<name>mapreduce.reduce.java.opts</name>
<value>
-Xmx512M -Djava.library.path=/home/hadoop/project/hadoop-0.20.2/0.20.2/jcuda_1.1_linux64
</value>
</property>
</configuration>