You could also try creating a lib directory containing the dependent jar and packaging it inside the job's jar file. Please refer to this blog post for details: http://www.cloudera.com/blog/2011/01/how-to-include-third-party-libraries-in-your-map-reduce-job/
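For example, here is a rough sketch of that approach (the jar names, paths, and the WordCount main class below are placeholders, adjust them to your project):

    # add the JCuda jars under a lib/ directory inside the existing job jar
    mkdir lib
    cp /path/to/jcuda.jar /path/to/jcublas.jar lib/
    jar uf wordcount.jar lib/

    # submit as usual; jars under lib/ inside the job jar are added to the task classpath
    hadoop jar wordcount.jar WordCount input output

If your driver goes through ToolRunner, passing the extra jars with the generic -libjars option (hadoop jar wordcount.jar WordCount -libjars jcuda.jar,jcublas.jar input output) is another way to ship them to the workers.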
On Wed, Sep 26, 2012 at 4:57 PM, sudha sadhasivam <sudhasadhasi...@yahoo.com> wrote:

> Sir
> We have also tried the option of putting JCUBLAS.jar in the hadoop jar.
> Still it does not recognise it.
> We would be thankful if you could provide us with a sample exercise on the
> same with steps for execution.
> I am herewith attaching the error file.
> Thanking you
> With warm regards
> Dr G Sudha Sadasivam
>
> --- On Tue, 9/25/12, Chen He <airb...@gmail.com> wrote:
>
> From: Chen He <airb...@gmail.com>
> Subject: Re: Hadoop and Cuda , JCuda (CPU+GPU architecture)
> To: "sudha sadhasivam" <sudhasadhasi...@yahoo.com>
> Cc: common-user@hadoop.apache.org
> Date: Tuesday, September 25, 2012, 9:01 PM
>
> Hi Sudha
>
> Good question.
>
> First of all, you need to state clearly what your Hadoop environment is
> (pseudo-distributed or a real cluster).
>
> Secondly, you need to understand how Hadoop ships a job's jar file to the
> worker nodes: it copies only the job jar itself, which does not contain
> jcuda.jar. The MapReduce program may not find it even if you put jcuda.jar
> on your worker nodes' classpath.
>
> I suggest you include jcuda.jar inside your wordcount.jar. Then, when
> Hadoop copies wordcount.jar to every worker node's temporary working
> directory, you do not need to worry about this issue.
>
> Let me know if you have further questions.
>
> Chen
>
> On Tue, Sep 25, 2012 at 12:38 AM, sudha sadhasivam <sudhasadhasi...@yahoo.com> wrote:
>
> > Sir
> > We tried to integrate hadoop and JCUDA.
> > We tried the code from
> > http://code.google.com/p/mrcl/source/browse/trunk/hama-mrcl/src/mrcl/mrcl/?r=76
> >
> > We are able to compile, but not able to execute: it does not recognise
> > JCUBLAS.jar. We tried setting the classpath.
> > We are herewith attaching the procedure for the same along with the errors.
> > Kindly inform us how to proceed. It is our UG project.
> > Thanking you
> > Dr G Sudha Sadasivam
> >
> > --- On Mon, 9/24/12, Chen He <airb...@gmail.com> wrote:
> >
> > From: Chen He <airb...@gmail.com>
> > Subject: Re: Hadoop and Cuda , JCuda (CPU+GPU architecture)
> > To: common-user@hadoop.apache.org
> > Date: Monday, September 24, 2012, 9:03 PM
> >
> > http://wiki.apache.org/hadoop/CUDA%20On%20Hadoop
> >
> > On Mon, Sep 24, 2012 at 10:30 AM, Oleg Ruchovets <oruchov...@gmail.com> wrote:
> >
> > > Hi
> > >
> > > I am going to process video analytics using Hadoop.
> > > I am very interested in the CPU+GPU architecture, especially using CUDA
> > > (http://www.nvidia.com/object/cuda_home_new.html) and JCUDA
> > > (http://jcuda.org/).
> > > Does using Hadoop with a CPU+GPU architecture bring a significant
> > > performance improvement, and has anyone succeeded in implementing it at
> > > production quality?
> > >
> > > I didn't find any projects / examples using such technology.
> > > If someone could give me a link to best practices and examples using
> > > CUDA/JCUDA + Hadoop, that would be great.
> > > Thanks in advance
> > > Oleg.