Hey,

On Thu, Feb 24, 2011 at 10:13 AM, Adarsh Sharma
<adarsh.sha...@orkash.com> wrote:
> Dear all,
>
> I am confused about the concepts used while running map-reduce jobs in
> Hadoop Cluster.
> I attached a program that is used to run in Hadoop Cluster. Please find the
> attachment.
> My PATH Variable shows that it includes all libraries as
>
> [hadoop@cuda1 ~]$ echo $PATH
> /usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/hadoop/project/hadoop-0.20.2/jcuda.jar:/usr/local/cuda/lib:/home/hadoop/bin
> [hadoop@cuda1 ~]$

AFAIK, on Linux, java.library.path is seeded from LD_LIBRARY_PATH,
while on Windows it comes from PATH.
Also, if Hadoop is launched via its bin scripts, the build/lib/native
or lib/native folders (when present) are appended to Hadoop's
java.library.path, so you can place your native libraries there
(under the proper Java platform subdirectory).
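For instance, something along these lines before launching the job
(the paths here are taken from your echo output, so treat them as
guesses to adjust for your install; note a jar such as jcuda.jar
belongs on the classpath, not on PATH):

```shell
# Native CUDA libraries go on LD_LIBRARY_PATH (Linux), not PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib:$LD_LIBRARY_PATH

# Jars go on the classpath; HADOOP_CLASSPATH is read by the bin scripts
export HADOOP_CLASSPATH=/home/hadoop/project/hadoop-0.20.2/jcuda.jar
```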

-- 
Harsh J
www.harshj.com
