Sonal Goyal wrote:
Adarsh,

Are you trying to distribute both the native library and the jcuda.jar?
Could you please explain your job's dependencies?

Yes, of course. I am trying to run a JCuda program on a Hadoop cluster. I am able to run it standalone through simple javac & java commands by setting the PATH & LD_LIBRARY_PATH variables to the */usr/local/cuda/lib* & */home/hadoop/project/jcuda_1.1_linux* folders.

I listed the contents & jars of these directories:

[hadoop@cuda1 lib]$ pwd
/usr/local/cuda/lib
[hadoop@cuda1 lib]$ ls -ls
total 158036
    4 lrwxrwxrwx 1 root root       14 Feb 23 19:37 libcublas.so -> libcublas.so.3
    4 lrwxrwxrwx 1 root root       19 Feb 23 19:37 libcublas.so.3 -> libcublas.so.3.2.16
81848 -rwxrwxrwx 1 root root 83720712 Feb 23 19:37 libcublas.so.3.2.16
    4 lrwxrwxrwx 1 root root       14 Feb 23 19:37 libcudart.so -> libcudart.so.3
    4 lrwxrwxrwx 1 root root       19 Feb 23 19:37 libcudart.so.3 -> libcudart.so.3.2.16
  424 -rwxrwxrwx 1 root root   423660 Feb 23 19:37 libcudart.so.3.2.16
    4 lrwxrwxrwx 1 root root       13 Feb 23 19:37 libcufft.so -> libcufft.so.3
    4 lrwxrwxrwx 1 root root       18 Feb 23 19:37 libcufft.so.3 -> libcufft.so.3.2.16
27724 -rwxrwxrwx 1 root root 28351780 Feb 23 19:37 libcufft.so.3.2.16
    4 lrwxrwxrwx 1 root root       14 Feb 23 19:37 libcurand.so -> libcurand.so.3
    4 lrwxrwxrwx 1 root root       19 Feb 23 19:37 libcurand.so.3 -> libcurand.so.3.2.16
 4120 -rwxrwxrwx 1 root root  4209384 Feb 23 19:37 libcurand.so.3.2.16
    4 lrwxrwxrwx 1 root root       16 Feb 23 19:37 libcusparse.so -> libcusparse.so.3
    4 lrwxrwxrwx 1 root root       21 Feb 23 19:37 libcusparse.so.3 -> libcusparse.so.3.2.16
43048 -rwxrwxrwx 1 root root 44024836 Feb 23 19:37 libcusparse.so.3.2.16
  172 -rwxrwxrwx 1 root root   166379 Nov 25 11:29 libJCublas-linux-x86_64.so
  152 -rwxrwxrwx 1 root root   144179 Nov 25 11:29 libJCudaDriver-linux-x86_64.so
   16 -rwxrwxrwx 1 root root     8474 Mar 31  2009 libjcudafft.so
  136 -rwxrwxrwx 1 root root   128672 Nov 25 11:29 libJCudaRuntime-linux-x86_64.so
   80 -rwxrwxrwx 1 root root    70381 Mar 31  2009 libjcuda.so
   44 -rwxrwxrwx 1 root root    38039 Nov 25 11:29 libJCudpp-linux-x86_64.so
   44 -rwxrwxrwx 1 root root    38383 Nov 25 11:29 libJCufft-linux-x86_64.so
   48 -rwxrwxrwx 1 root root    43706 Nov 25 11:29 libJCurand-linux-x86_64.so
  140 -rwxrwxrwx 1 root root   133280 Nov 25 11:29 libJCusparse-linux-x86_64.so

And the second folder:

[hadoop@cuda1 jcuda_1.1_linux64]$ pwd
/home/hadoop/project/hadoop-0.20.2/jcuda_1.1_linux64
[hadoop@cuda1 jcuda_1.1_linux64]$ ls -ls
total 200
8 drwxrwxrwx 6 hadoop hadoop  4096 Feb 24 01:44 doc
8 drwxrwxrwx 3 hadoop hadoop  4096 Feb 24 01:43 examples
32 -rwxrwxr-x 1 hadoop hadoop 28484 Feb 24 01:43 jcuda.jar
4 -rw-rw-r-- 1 hadoop hadoop     0 Mar  1 21:27 libcublas.so.3
4 -rw-rw-r-- 1 hadoop hadoop     0 Mar  1 21:27 libcublas.so.3.2.16
4 -rw-rw-r-- 1 hadoop hadoop     0 Mar  1 21:27 libcudart.so.3
4 -rw-rw-r-- 1 hadoop hadoop     0 Mar  1 21:27 libcudart.so.3.2.16
4 -rw-rw-r-- 1 hadoop hadoop     0 Mar  1 21:27 libcufft.so.3
4 -rw-rw-r-- 1 hadoop hadoop     0 Mar  1 21:27 libcufft.so.3.2.16
4 -rw-rw-r-- 1 hadoop hadoop     0 Mar  1 21:27 libcurand.so.3
4 -rw-rw-r-- 1 hadoop hadoop     0 Mar  1 21:27 libcurand.so.3.2.16
4 -rw-rw-r-- 1 hadoop hadoop     0 Mar  1 21:27 libcusparse.so.3
4 -rw-rw-r-- 1 hadoop hadoop     0 Mar  1 21:27 libcusparse.so.3.2.16
16 -rwxr-xr-x 1 hadoop hadoop  8474 Mar  1 04:12 libjcudafft.so
80 -rwxr-xr-x 1 hadoop hadoop 70381 Mar  1 04:11 libjcuda.so
8 -rwxrwxr-x 1 hadoop hadoop   811 Feb 24 01:43 README.txt
8 drwxrwxrwx 2 hadoop hadoop  4096 Feb 24 01:43 resources
[hadoop@cuda1 jcuda_1.1_linux64]$

I think Hadoop is not able to see *jcuda.jar* in the TaskTracker process. Please guide me on how to make it available there.


Thanks & best regards,
Adarsh Sharma

Thanks and Regards,
Sonal
Hadoop ETL and Data Integration <https://github.com/sonalgoyal/hiho>
Nube Technologies <http://www.nubetech.co>

<http://in.linkedin.com/in/sonalgoyal>





On Mon, Feb 28, 2011 at 6:54 PM, Adarsh Sharma <[email protected]>wrote:

Sonal Goyal wrote:

Hi Adarsh,

I think your mapred.cache.files property has an extra space at the end.
Try
removing that and let us know how it goes.
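A quick way to see why a trailing space matters: java.net.URI rejects a space in a path, and it reports exactly the index shown in the job client's error. A minimal standalone sketch (plain Java, using the HDFS address from the trace below purely as an example string):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class TrailingSpaceDemo {

    // Returns -1 if the string parses as a URI, otherwise the index
    // of the first offending character reported by the parser.
    static int offendingIndex(String s) {
        try {
            new URI(s);
            return -1;
        } catch (URISyntaxException e) {
            return e.getIndex();
        }
    }

    public static void main(String[] args) {
        String clean = "hdfs://192.168.0.131:54310/jcuda.jar";
        System.out.println(offendingIndex(clean));        // -1: parses fine
        System.out.println(offendingIndex(clean + " "));  // 36: the trailing space
    }
}
```

The space is character 36 of the value, which matches the "Illegal character in path at index 36" message from StringUtils.stringToURI.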
Thanks and Regards,
Sonal
Hadoop ETL and Data Integration <https://github.com/sonalgoyal/hiho>
Nube Technologies <http://www.nubetech.co>

<http://in.linkedin.com/in/sonalgoyal>




Thanks a lot Sonal, but it still doesn't succeed.
Please, if possible, tell me the proper steps that need to be followed
after configuring the Hadoop cluster.

What puzzles me is that the simple standalone commands succeed:

[root@cuda1 hadoop-0.20.2]# javac EnumDevices.java
[root@cuda1 hadoop-0.20.2]# java EnumDevices
Total number of devices: 1
Name: Tesla C1060
Version: 1.3
Clock rate: 1296000 MHz
Threads per block: 512


but in the Map-Reduce job it fails:

11/02/28 18:42:47 INFO mapred.JobClient: Task Id :
attempt_201102281834_0001_m_000001_2, Status : FAILED
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
      at
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:115)
      at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:569)
      at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
      at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: java.lang.reflect.InvocationTargetException
      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
Method)
      at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
      at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
      at
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:113)
      ... 3 more
Caused by: java.lang.UnsatisfiedLinkError: no jcuda in java.library.path
      at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1734)
      at java.lang.Runtime.loadLibrary0(Runtime.java:823)
      at java.lang.System.loadLibrary(System.java:1028)
      at jcuda.driver.CUDADriver.<clinit>(CUDADriver.java:909)
      at jcuda.CUDA.init(CUDA.java:62)
      at jcuda.CUDA.<init>(CUDA.java:42)




Thanks & best Regards,

Adarsh Sharma

On Mon, Feb 28, 2011 at 5:06 PM, Adarsh Sharma <[email protected]
wrote:

Thanks Sanjay, it seems I found the root cause.

But now I run into the following error:

[hadoop@ws37-mah-lin hadoop-0.20.2]$ bin/hadoop jar wordcount1.jar
org.myorg.WordCount /user/hadoop/gutenberg /user/hadoop/output1
Exception in specified URI's java.net.URISyntaxException: Illegal character
in path at index 36: hdfs://192.168.0.131:54310/jcuda.jar
     at java.net.URI$Parser.fail(URI.java:2809)
     at java.net.URI$Parser.checkChars(URI.java:2982)
     at java.net.URI$Parser.parseHierarchical(URI.java:3066)
     at java.net.URI$Parser.parse(URI.java:3014)
     at java.net.URI.<init>(URI.java:578)
     at
org.apache.hadoop.util.StringUtils.stringToURI(StringUtils.java:204)
     at

org.apache.hadoop.filecache.DistributedCache.getCacheFiles(DistributedCache.java:593)
     at

org.apache.hadoop.mapred.JobClient.configureCommandLineOptions(JobClient.java:638)
     at
org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:761)
     at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
     at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
     at org.myorg.WordCount.main(WordCount.java:59)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at

sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
     at

sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
     at java.lang.reflect.Method.invoke(Method.java:597)
     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

Exception in thread "main" java.lang.NullPointerException
     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:176)
     at

org.apache.hadoop.filecache.DistributedCache.getTimestamp(DistributedCache.java:506)
     at

org.apache.hadoop.mapred.JobClient.configureCommandLineOptions(JobClient.java:640)
     at
org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:761)
     at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
     at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
     at org.myorg.WordCount.main(WordCount.java:59)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at

sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
     at

sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
     at java.lang.reflect.Method.invoke(Method.java:597)
     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

Please check my attached mapred-site.xml


Thanks & best regards,

Adarsh Sharma



Kaluskar, Sanjay wrote:



You will probably have to use distcache to distribute your jar to all
the nodes too. Read the distcache documentation; then on each node you
can add the new jar to the java.library.path through
mapred.child.java.opts.

You need to do something like the following in mapred-site.xml, where
fs-uri is the URI of the file system (something like
host.mycompany.com:54310).

<property>
 <name>mapred.cache.files</name>
 <value>hdfs://fs-uri/jcuda/jcuda.jar#jcuda.jar </value>
</property>
<property>
 <name>mapred.create.symlink</name>
 <value>yes</value>
</property>
<property>
 <name>mapred.child.java.opts</name>
 <value>-Djava.library.path=jcuda.jar</value>
</property>
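(One caveat, based on my reading of the docs rather than on having tested this setup: java.library.path is searched for native .so files, not jars. So the jar itself needs to reach the task classpath — e.g. via the distcache symlink above plus the job's classpath, or -libjars — while java.library.path should point at the directory holding libjcuda.so on each node. Something like the following, where the path is a placeholder for wherever the native libraries live:)

```xml
<property>
  <name>mapred.child.java.opts</name>
  <!-- directory containing libjcuda.so on each node (placeholder path) -->
  <value>-Djava.library.path=/usr/local/cuda/lib</value>
</property>
```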


-----Original Message-----
From: Adarsh Sharma [mailto:[email protected]] Sent: 28 February
2011 16:03
To: [email protected]
Subject: Setting java.library.path for map-reduce job

Dear all,

I want to set some extra jars in java.library.path, used while running
a map-reduce program in the Hadoop cluster.

I get an exception, "no jcuda in java.library.path", in each map
task.

I run my map-reduce code by below commands :

javac -classpath /home/hadoop/project/hadoop-0.20.2/hadoop-0.20.2-core.jar:/home/hadoop/project/hadoop-0.20.2/jcuda_1.1_linux64/jcuda.jar:/home/hadoop/project/hadoop-0.20.2/lib/commons-cli-1.2.jar -d wordcount_classes1/ WordCount.java

jar -cvf wordcount1.jar -C wordcount_classes1/ .

bin/hadoop jar wordcount1.jar org.myorg.WordCount /user/hadoop/gutenberg
/user/hadoop/output1


Please guide me on how to achieve this.



Thanks & best Regards,

Adarsh Sharma






