The MapReduce job code I have (a Java app) depends on other libraries. It runs
fine when the job is run locally, but when I run it on a true distributed
setup it fails on missing dependencies. Do I have to put all the dependent
libraries and property files of my application in HADOOP_CLASSPATH for the
MapReduce job to run on a cluster?
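For reference, would something like the following be the right approach instead? This is just a sketch; the jar and file names are placeholders, and I understand -libjars only works if the driver class goes through ToolRunner/GenericOptionsParser:

```shell
# Sketch: ship dependent jars to the cluster tasks with -libjars,
# and side files (e.g. property files) with -files, instead of
# relying on HADOOP_CLASSPATH (which only affects the client JVM).
hadoop jar myjob.jar com.example.MyDriver \
    -libjars /path/to/dep1.jar,/path/to/dep2.jar \
    -files app.properties \
    input_dir output_dir
```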

thanks
venkatesh

