Hi,

I want to deploy my MapReduce job jar on the Hadoop cluster. I've always done 
that by doing the following:

1. Copying the job jar to all datanodes.
2. Putting the job jar on the Hadoop classpath on all machines.

Isn't Hadoop capable of copying the job jar to all machines in the cluster 
itself? That is what I read (that the JobTracker copies the job jar, etc.), but 
if I don't do the above, the TaskTrackers cannot find the job. I know I am 
missing something.
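
For reference, my driver looks roughly like the sketch below (the class name, 
paths, and the identity mapper/reducer are placeholders, not my actual job):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class MyJobDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = new Job(conf, "my-job");

            // Points Hadoop at the jar containing the job classes so the
            // framework can ship it to the cluster by itself.
            job.setJarByClass(MyJobDriver.class);

            // Identity mapper/reducer as stand-ins for my real classes.
            job.setMapperClass(Mapper.class);
            job.setReducerClass(Reducer.class);
            job.setOutputKeyClass(LongWritable.class);
            job.setOutputValueClass(Text.class);

            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

I build that into my-job.jar and submit it from a single node with 
"hadoop jar my-job.jar MyJobDriver /input /output".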

Could someone please let me know how I can run my job without having to copy it 
to every machine in the cluster?

Thanks,
Deepika
