Check the JobTracker web UI at namenode:50030. If the job appears there, it is not running in local mode; otherwise it is.
*Thanks & Regards*
∞ Shashwat Shriparv

On Sun, Apr 28, 2013 at 1:18 AM, sudhakara st <[email protected]> wrote:

> Hello Kevin,
>
> In this case:
>
>     JobClient client = new JobClient();
>     JobConf conf = new JobConf(WordCount.class);
>
> the job client picks up its configuration from the HADOOP_HOME
> configuration files on the local system (which default to local mode).
>
> If your job configuration instead looks like this:
>
>     Configuration conf = new Configuration();
>     conf.set("fs.default.name", "hdfs://name_node:9000");
>     conf.set("mapred.job.tracker", "job_tracker_node:9001");
>
> the client submits to the namenode and job tracker you specify.
>
> Regards,
> Sudhakara.st
>
>
> On Sat, Apr 27, 2013 at 2:52 AM, Kevin Burton <[email protected]> wrote:
>
>> It is hdfs://devubuntu05:9000. Is this wrong? devubuntu05 is the name
>> of the host where the NameNode and JobTracker should be running. It is
>> also the host where I am running the M/R client code.
>>
>> On Apr 26, 2013, at 4:06 PM, Rishi Yadav <[email protected]> wrote:
>>
>> Check core-site.xml and look at the value of fs.default.name. If it is
>> localhost, you are running locally.
>>
>>
>> On Fri, Apr 26, 2013 at 1:59 PM, <[email protected]> wrote:
>>
>>> I suspect that my MapReduce job is being run locally. I don't have any
>>> evidence, but I am not sure how the specifics of my configuration are
>>> communicated to the Java code that I write. Based on what I have read
>>> online, I basically start with code like:
>>>
>>>     JobClient client = new JobClient();
>>>     JobConf conf = new JobConf(WordCount.class);
>>>     . . . . .
>>>
>>> Where do I communicate the configuration information so that the M/R
>>> job runs on the cluster and not locally? Or is the configuration
>>> location "magically determined"?
>>>
>>> Thank you.
>
>
> --
> Regards,
> ..... Sudhakara.st
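For reference, here is how those two settings slot into a complete driver. This is a minimal sketch using the old org.apache.hadoop.mapred API (Hadoop 1.x); the host names name_node and job_tracker_node are the placeholders from Sudhakara's mail, the class name WordCountDriver is made up, and the mapper/reducer wiring is left out:

    import java.io.IOException;

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class WordCountDriver {
        public static void main(String[] args) throws IOException {
            JobConf conf = new JobConf(WordCountDriver.class);
            conf.setJobName("wordcount");

            // Point the client at the cluster. Without these (or the
            // equivalent *-site.xml files on the classpath), Hadoop
            // defaults to fs.default.name=file:/// and
            // mapred.job.tracker=local, i.e. the LocalJobRunner.
            conf.set("fs.default.name", "hdfs://name_node:9000");
            conf.set("mapred.job.tracker", "job_tracker_node:9001");

            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(IntWritable.class);
            // Set your own classes here with conf.setMapperClass(...)
            // and conf.setReducerClass(...); identity map/reduce is
            // used if they are omitted.

            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            JobClient.runJob(conf); // submits and waits for completion
        }
    }

A quick way to confirm the submission went to the cluster is the check Shashwat mentions above: the job should show up on the JobTracker web UI at port 50030.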
