On 11/10/06, Milind Bhandarkar <[EMAIL PROTECTED]> wrote:
The namenode warnings could be a result of one of two scenarios: first, you may have started multiple datanodes on a single machine, and therefore the number of machines in DFS and the number of datanodes are not in sync. This problem was also fixed recently in a patch to HADOOP-382. Second, the datanodes do not have enough available disk space to store a block. In any case, the namenode warnings you mention should not result in the exception you are seeing. You can use the "bin/hadoop dfs -ls" command to check whether the input directory for the map (/tmp/wcin) really exists.

- Milind

On Nov 9, 2006, at 8:37 AM, howard chen wrote:

> Hello,
>
> I followed your instruction, and now the namenode can be started, good!
>
> but when I invoke the example, e.g.
>
> bin/hadoop --config ... jar hadoop-0.8.0-examples.jar wordcount -m 1 -r 1 /tmp/wcin/ /tmp/wcout/
>
> an exception is thrown:
>
> java.io.IOException: Input directory /tmp/wcin in server01:50000 is invalid.
>         at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:311)
>         at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:368)
>         at org.apache.hadoop.examples.WordCount.main(WordCount.java:143)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:585)
>         at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>         at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:143)
>         at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:41)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:585)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:149)
>
> and in the namenode log I found many of these:
>
> 2006-11-10 00:17:57,110 WARN org.apache.hadoop.fs.FSNamesystem: Zero targets found, forbidden1.size=4 forbidden2.size()=0
> 2006-11-10 00:17:57,110 WARN org.apache.hadoop.fs.FSNamesystem: Zero targets found, forbidden1.size=4 forbidden2.size()=0
> 2006-11-10 00:17:57,110 WARN org.apache.hadoop.fs.FSNamesystem: Zero targets found, forbidden1.size=4 forbidden2.size()=0
> ...
>
> thanks first.
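A minimal sketch of the check Milind suggests, in the style of the commands used elsewhere in this thread (the "--config ..." directory and the local file name input.txt are placeholders, not the poster's actual values):

$ bin/hadoop --config ... dfs -ls /tmp/wcin

If that lists the directory and its files, the input side of the job is fine; if it errors out or comes back empty, create and populate the directory before resubmitting:

$ bin/hadoop --config ... dfs -mkdir /tmp/wcin
$ bin/hadoop --config ... dfs -copyFromLocal ./input.txt /tmp/wcin/input.txt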
1. I am using hadoop-0.8.0.tar.gz (downloaded from the Hadoop site, not svn).

2. I tried some commands to put files into the DFS, e.g.

$ hadoop --config ... dfs -mkdir /tmp/test-mkdir
$ hadoop --config ... dfs -copyFromLocal ./foo.txt /tmp/test-mkdir/foo.txt
$ hadoop --config ... dfs -put ./some_directory /tmp/test-mkdir/foo.txt/some_directory

but when I use

$ hadoop --config ... dfs -ls

it returns

06/11/10 12:02:13 INFO ipc.Client: org.apache.hadoop.io.ObjectWritableConnection culler maxidletime= 1000ms
06/11/10 12:02:13 INFO ipc.Client: org.apache.hadoop.io.ObjectWritable Connection Culler: starting
Found 0 items

3. when reporting, e.g. dfs -report, it returns

Total effective bytes: 1641 (1.60 k)
Effective replication multiplier: 2715720.889092017
-------------------------------------------------
Datanodes available: 4

Name: server4:50010
Total raw bytes: 37843353600 (35.24 GB)
Used raw bytes: 1045558989 (997.12 MB)
% used: 2.76%
Last contact: Fri Nov 10 12:03:06 HKT 2006

Name: server2:50010
Total raw bytes: 37843353600 (35.24 GB)
Used raw bytes: 1087923591 (1.01 GB)
% used: 2.87%
Last contact: Fri Nov 10 12:03:08 HKT 2006

Name: server3:50010
Total raw bytes: 37843353600 (35.24 GB)
Used raw bytes: 1087903520 (1.01 GB)
% used: 2.87%
Last contact: Fri Nov 10 12:03:09 HKT 2006

Name: server1:50010
Total raw bytes: 37843353600 (35.24 GB)
Used raw bytes: 1235111879 (1.15 GB)
% used: 3.26%
Last contact: Fri Nov 10 12:03:09 HKT 2006

Thanks for any comments!
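One follow-up check worth running here (a sketch; it assumes that in this version a bare "dfs -ls" resolves against the user's DFS home directory rather than /, so "Found 0 items" does not by itself prove the writes failed): list the absolute paths explicitly to see whether the -mkdir and -put above actually took effect:

$ hadoop --config ... dfs -ls /tmp
$ hadoop --config ... dfs -ls /tmp/test-mkdir

The -report output is consistent with almost nothing being stored in DFS: the four "Used raw bytes" figures sum to 4456497979, and 4456497979 / 1641 effective bytes is exactly the 2715720.889 multiplier shown, which suggests nearly all of the raw usage is other data on those disks rather than DFS blocks.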
