Hi, I am not using any client systems, so I am not sure if I need to configure that part.
On Sat, Aug 18, 2012 at 10:50 PM, Shashwat Shriparv <[email protected]> wrote:

> What is your hosts file configuration? Problems with the NameNode are
> almost always due to the host configuration.
>
> From: rahul p [mailto:[email protected]]
> Sent: Saturday, August 18, 2012 5:07 PM
> To: [email protected]
> Subject: Re: hadoop 1.0.3 config exception
>
> Hi Ben,
>
> Can you help me resolve this issue? I am new to Hadoop and Java, and I am
> facing an issue bringing up my NameNode.
>
> On Fri, Aug 17, 2012 at 11:59 PM, Ben Cuthbert <[email protected]> wrote:
>
>> All,
>>
>> We are getting the following output when we talk to Hadoop 1.0.3. It
>> seems to relate to these lines in Configuration.java:
>>
>>     public Configuration(boolean loadDefaults) {
>> 225    this.loadDefaults = loadDefaults;
>> 226    if (LOG.isDebugEnabled()) {
>> 227      LOG.debug(StringUtils.stringifyException(new IOException("config()")));
>> 228    }
>> 229    synchronized (Configuration.class) {
>> 230      REGISTRY.put(this, null);
>> 231    }
>> 232    this.storeResource = false;
>> 233  }
>>
>> Why is this here?
>>
>> 2012-08-17 16:53:11,133 (hdfs-hdfs-sink-call-runner-4) [DEBUG - org.apache.hadoop.conf.Configuration.<init>(Configuration.java:227)] java.io.IOException: config()
>>   at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:227)
>>   at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:214)
>>   at org.apache.flume.sink.hdfs.BucketWriter.doOpen(BucketWriter.java:170)
>>   at org.apache.flume.sink.hdfs.BucketWriter.access$000(BucketWriter.java:48)
>>   at org.apache.flume.sink.hdfs.BucketWriter$1.run(BucketWriter.java:155)
>>   at org.apache.flume.sink.hdfs.BucketWriter$1.run(BucketWriter.java:152)
>>   at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:125)
>>   at org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:152)
>>   at org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:307)
>>   at org.apache.flume.sink.hdfs.HDFSEventSink$1.call(HDFSEventSink.java:717)
>>   at org.apache.flume.sink.hdfs.HDFSEventSink$1.call(HDFSEventSink.java:714)
>>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>   at java.lang.Thread.run(Thread.java:680)
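On the "Why is this here?" question: the `IOException` in the constructor is never thrown. It exists only so `StringUtils.stringifyException` can capture a stack trace of whoever instantiated the `Configuration`, which is then logged at DEBUG level so developers can see where each `Configuration` object is created. It is noisy debug instrumentation, not an error. A minimal standalone sketch of the same idiom (using `java.io` directly in place of Hadoop's `StringUtils`, which does essentially the same thing):

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.io.StringWriter;

public class ConfigTrace {

    // Equivalent of org.apache.hadoop.util.StringUtils.stringifyException:
    // render a throwable's stack trace into a String.
    static String stringifyException(Throwable t) {
        StringWriter sw = new StringWriter();
        t.printStackTrace(new PrintWriter(sw, true));
        return sw.toString();
    }

    public static void main(String[] args) {
        // The exception is created purely to capture the current call site;
        // it is never thrown. "config()" is just the message text that shows
        // up in the log line, as in the trace above.
        String trace = stringifyException(new IOException("config()"));
        System.out.println(trace);
    }
}
```

Raising the Flume/Hadoop log level above DEBUG makes these lines disappear; they do not indicate a fault.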
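On Shashwat's point about the hosts file: NameNode startup failures on single-node setups are frequently caused by the machine's hostname resolving to a loopback alias such as 127.0.1.1 (a common Ubuntu default) rather than the address the NameNode is configured to bind. A minimal sketch of a working `/etc/hosts`, where the hostname `hadoop-master` and address `192.168.1.10` are placeholders to be replaced with your actual machine's values:

```
127.0.0.1     localhost
192.168.1.10  hadoop-master    # placeholder name/IP; must match the host in fs.default.name
```

The hostname used here should agree with the URI in `fs.default.name` (core-site.xml), and any `127.0.1.1 <hostname>` line should be removed or corrected.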
