Hello,

I need some clarification on how to run a Hadoop job locally.

The cluster was originally set up with two nodes, one of which
also acts as the master node and job tracker.

According to the wiki, I can run a job locally by setting the
"mapred.job.tracker" and "fs.default.name" properties to "local" in
hadoop-site.xml.  But when I start the server, it dumps a stack trace:

localhost: starting secondarynamenode, logging to /home/blahblahblah
localhost: Exception in thread "main" java.lang.RuntimeException: Not
a host:port pair: local
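
In case I misread the wiki, here is roughly what the relevant part of
my hadoop-site.xml looks like after the change (other properties
omitted):

  <property>
    <name>mapred.job.tracker</name>
    <value>local</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>local</value>
  </property>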

Apparently the secondary namenode didn't like the value "local"?

Also, the wiki notes that these XML configuration files should be
included somewhere in the job's classpath.  Does that mean I need to
add the XML files to the classpath the same way I do jars?

Thanks,

-- Jim
