Pat,

Perhaps your program isn't picking up the right filesystem when it starts. What 
does "hadoop classpath" print?
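A quick way to check, from HADOOP_HOME (the hostname below is a placeholder; 
the real value comes from your own core-site.xml):

```shell
# Show which jars/conf dirs the hadoop script will put on the classpath.
bin/hadoop classpath

# The default filesystem is fs.default.name in conf/core-site.xml.
# If it is file:/// (or the property is missing) instead of something like
# hdfs://namenode.example.com:8020, relative paths such as "input" will
# resolve against the local disk rather than HDFS.
grep -A1 'fs.default.name' conf/core-site.xml
```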

As a workaround, you can also pass fully qualified HDFS URIs for the input and 
output paths:
input -> hdfs://host:port/user/path/to/input
output -> hdfs://host:port/user/path/to/output

And then that should work.
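Concretely, the grep example would look something like this (the NameNode 
host:port is a placeholder for whatever your core-site.xml actually says, and 
the paths assume your files live under /user/MyId):

```shell
# Hypothetical NameNode address -- substitute your own fs.default.name value.
NN=hdfs://namenode.example.com:8020

# Same grep example as before, but with explicit hdfs:// URIs so the job
# cannot fall back to the local filesystem.
bin/hadoop jar hadoop-mapred-examples-0.22.0.jar grep \
    "$NN/user/MyId/input" "$NN/user/MyId/output" 'dfs[a-z.]+'
```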

On 24-Dec-2011, at 5:10 AM, Pat Flaherty wrote:

> Hi,
> 
> Installed 0.22.0 on CentOS 5.7.  I can start dfs and mapred and see their 
> processes.
> 
> Ran the first grep example: bin/hadoop jar hadoop-*-examples.jar grep input 
> output 'dfs[a-z.]+'.  It seems the correct jar name is 
> hadoop-mapred-examples-0.22.0.jar - there are no other hadoop*examples*.jar 
> files in HADOOP_HOME.
> 
> Didn't work.  Then found and tried pi (compute pi) - that works, so my 
> installation is to some degree of approximation good.
> 
> Back to grep.  It fails with 
> 
>> java.io.FileNotFoundException: File does not exist: /user/MyId/input/conf
> 
> Found and ran bin/hadoop fs -ls.  OK these directory names are internal to 
> hadoop (I assume) because Linux has no idea of /user.
> 
> And the directory is there - but the program is failing.
> 
> Any suggestions; where to start; etc?
> 
> Thanks - Pat
