Hi Adarsh,

I think your mapred.cache.files property has an extra space at the end. Try
removing that and let us know how it goes.
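
For reference, Hadoop splits mapred.cache.files on commas and parses each entry as a java.net.URI (you can see StringUtils.stringToURI in your stack trace), and a URI path may not contain a space. A quick standalone check, using the HDFS path from your error message plus the suspected trailing space:

import java.net.URI;
import java.net.URISyntaxException;

public class UriSpaceCheck {
    public static void main(String[] args) {
        // The path from the stack trace, plus the suspected trailing space
        String cacheEntry = "hdfs://192.168.0.131:54310/jcuda.jar ";
        try {
            new URI(cacheEntry);
            System.out.println("parsed OK");
        } catch (URISyntaxException e) {
            // A space is an illegal character in a URI path
            System.out.println("URISyntaxException at index " + e.getIndex());
        }
    }
}

This prints "URISyntaxException at index 36", the same index as in your log (the 36-character URI ends at index 35, so index 36 is the space). Trimming the whitespace before </value> in the mapred.cache.files property should clear the first exception; the NullPointerException that follows is most likely just the job client tripping over the same malformed entry.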
Thanks and Regards,
Sonal
Hadoop ETL and Data Integration <https://github.com/sonalgoyal/hiho>
Nube Technologies <http://www.nubetech.co>

<http://in.linkedin.com/in/sonalgoyal>

On Mon, Feb 28, 2011 at 5:06 PM, Adarsh Sharma <[email protected]> wrote:

> Thanks Sanjay, it seems I found the root cause.
>
> But now I get the following error:
>
> [hadoop@ws37-mah-lin hadoop-0.20.2]$ bin/hadoop jar wordcount1.jar
> org.myorg.WordCount /user/hadoop/gutenberg /user/hadoop/output1
> Exception in specified URI's java.net.URISyntaxException: Illegal character in path at index 36: hdfs://192.168.0.131:54310/jcuda.jar
>       at java.net.URI$Parser.fail(URI.java:2809)
>       at java.net.URI$Parser.checkChars(URI.java:2982)
>       at java.net.URI$Parser.parseHierarchical(URI.java:3066)
>       at java.net.URI$Parser.parse(URI.java:3014)
>       at java.net.URI.<init>(URI.java:578)
>       at org.apache.hadoop.util.StringUtils.stringToURI(StringUtils.java:204)
>       at org.apache.hadoop.filecache.DistributedCache.getCacheFiles(DistributedCache.java:593)
>       at org.apache.hadoop.mapred.JobClient.configureCommandLineOptions(JobClient.java:638)
>       at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:761)
>       at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
>       at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
>       at org.myorg.WordCount.main(WordCount.java:59)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>       at java.lang.reflect.Method.invoke(Method.java:597)
>       at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>
> Exception in thread "main" java.lang.NullPointerException
>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:176)
>       at org.apache.hadoop.filecache.DistributedCache.getTimestamp(DistributedCache.java:506)
>       at org.apache.hadoop.mapred.JobClient.configureCommandLineOptions(JobClient.java:640)
>       at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:761)
>       at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
>       at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
>       at org.myorg.WordCount.main(WordCount.java:59)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>       at java.lang.reflect.Method.invoke(Method.java:597)
>       at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>
> Please check my attached mapred-site.xml.
>
>
> Thanks & best regards,
>
> Adarsh Sharma
>
>
>
> Kaluskar, Sanjay wrote:
>
>> You will probably have to use the distributed cache to distribute your jar to
>> all the nodes too. Read the DistributedCache documentation; then on each node
>> you can add the new jar to java.library.path through mapred.child.java.opts.
>>
>> You need to do something like the following in mapred-site.xml, where
>> fs-uri is the URI of the file system (something like
>> host.mycompany.com:54310).
>>
>> <property>
>>  <name>mapred.cache.files</name>
>>  <value>hdfs://fs-uri/jcuda/jcuda.jar#jcuda.jar </value>
>> </property>
>> <property>
>>  <name>mapred.create.symlink</name>
>>  <value>yes</value>
>> </property>
>> <property>
>>  <name>mapred.child.java.opts</name>
>>  <value>-Djava.library.path=jcuda.jar</value>
>> </property>
>>
>>
>> -----Original Message-----
>> From: Adarsh Sharma [mailto:[email protected]]
>> Sent: 28 February 2011 16:03
>> To: [email protected]
>> Subject: Setting java.library.path for map-reduce job
>>
>> Dear all,
>>
>> I want to add some extra jars to java.library.path, used while running a
>> map-reduce program on the Hadoop cluster.
>>
>> I get an exception saying "no jcuda in java.library.path" in each map
>> task.
>>
>> I run my map-reduce code with the commands below:
>>
>> javac -classpath /home/hadoop/project/hadoop-0.20.2/hadoop-0.20.2-core.jar:/home/hadoop/project/hadoop-0.20.2/jcuda_1.1_linux64/jcuda.jar:/home/hadoop/project/hadoop-0.20.2/lib/commons-cli-1.2.jar
>> -d wordcount_classes1/ WordCount.java
>>
>> jar -cvf wordcount1.jar -C wordcount_classes1/ .
>>
>> bin/hadoop jar wordcount1.jar org.myorg.WordCount /user/hadoop/gutenberg
>> /user/hadoop/output1
>>
>>
>> Please guide how to achieve this.
>>
>>
>>
>> Thanks & best regards,
>>
>> Adarsh Sharma
>>
>>
>
>
