I'm trying to figure out how to achieve the following from a Java client:
1. My app (which is a web server) starts up
2. As part of startup, my jar file, which includes my map-reduce classes, is 
distributed to the Hadoop nodes 
3. My web app uses map-reduce to extract data without the performance overhead 
of each job deploying a jar file via setJar()/setJarByClass()

It looks like DistributedCache has potential, but the need for commands like 
'hadoop fs -copyFromLocal ...' and API methods like 
'.getLocalCacheArchives()' seems at odds with my scenario. Any thoughts?
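For concreteness, here's a rough sketch of what I was hoping to do once at 
startup (class and path names are just placeholders, not working code): copy 
the jar to HDFS programmatically instead of shelling out to 'hadoop fs 
-copyFromLocal', then register it on the task classpath so later jobs don't 
re-deploy it.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StartupJarDeployer {

    // Hypothetical helper, called once when the web server starts:
    // upload the app jar to HDFS and add it to the distributed cache
    // classpath carried by this Configuration.
    public static void deployOnce(Configuration conf,
                                  String localJarPath,
                                  String hdfsJarPath) throws Exception {
        FileSystem fs = FileSystem.get(conf);
        Path dest = new Path(hdfsJarPath);

        // Programmatic equivalent of 'hadoop fs -copyFromLocal'
        fs.copyFromLocalFile(new Path(localJarPath), dest);

        // Tasks launched with this conf get the jar on their classpath,
        // so individual jobs shouldn't need setJar()/setJarByClass().
        DistributedCache.addFileToClassPath(dest, conf);
    }
}
```

The idea being that every subsequent job reuses this Configuration, so the 
jar only crosses the wire once. Is that a sane use of the API, or am I 
misreading what addFileToClassPath() is for?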

-Peter
