Hi,

Maybe someone here can help me with a rather noob question: where do I have to put my custom jar to run it as a map/reduce job? Can it go anywhere, as long as I then add it to the HADOOP_CLASSPATH variable in hadoop-env.sh?
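To make the question concrete: I can run the bundled demos with something like the following (the input/output paths here are just placeholders):

    bin/hadoop jar hadoop-*-examples.jar wordcount input output

I would like to invoke my own jar the same way, but I am not sure where it has to live for the hadoop script to pick it up.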

Also, since I am already using the Hadoop API from our server code, it seems natural to launch jobs from within our code as well. Are there any issues with that? I assume I first have to copy the jar files and make them available as per my question above, but after that I should be able to start the job from my own code?
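To illustrate what I have in mind, here is a minimal sketch of the kind of launcher I would embed in our server code, using the org.apache.hadoop.mapred API with the identity mapper/reducer standing in for our real classes (the class name, job name, and paths are all just placeholders):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.lib.IdentityMapper;
    import org.apache.hadoop.mapred.lib.IdentityReducer;

    public class MyJobLauncher {
        public static void main(String[] args) throws Exception {
            // Passing our own class tells Hadoop which jar to ship to
            // the cluster (JobConf(Class) locates the enclosing jar).
            JobConf conf = new JobConf(MyJobLauncher.class);
            conf.setJobName("my-custom-job");
            // Identity classes as stand-ins for our real mapper/reducer.
            conf.setMapperClass(IdentityMapper.class);
            conf.setReducerClass(IdentityReducer.class);
            FileInputFormat.setInputPaths(conf, new Path("input"));
            FileOutputFormat.setOutputPath(conf, new Path("output"));
            // Submits the job and blocks until it completes.
            JobClient.runJob(conf);
        }
    }

Is that roughly the right approach, or do I still need to distribute the jar to the nodes by hand before calling it?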

I have read most of the Wiki entries, and while the actual workings are described quite nicely, I could not find an answer to the questions above. The demos are already in place and can be started as-is, without the need to make them available first.

Again, I apologize for being a newbie.

Lars
