Is core-site.xml on your Eclipse classpath? The directory that contains the
site XMLs should be on the classpath, not the XML files directly.

Also make sure fs.defaultFS points to the correct HDFS path.
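
If you want to rule out classpath pickup entirely, you can also set the
default filesystem explicitly on the Configuration before submitting the
job. A minimal sketch only (the namenode address hdfs://localhost:8020 and
the job name are placeholders; adjust them to your setup):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SubmitToHdfs {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hadoop 2.x property name; on 1.x the equivalent is "fs.default.name".
    // hdfs://localhost:8020 is only a placeholder -- use your namenode address.
    conf.set("fs.defaultFS", "hdfs://localhost:8020");

    Job job = new Job(conf, "mrjob");
    // ... set jar, mapper, reducer and output key/value classes as usual ...

    // Relative paths like "input"/"output" now resolve against HDFS,
    // not file:/ under the local project directory.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

If this works while the classpath approach does not, the site XML directory
is simply not being picked up by the IDE run configuration.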

Regards,
Vinayakumar B
On Nov 2, 2013 5:21 PM, "Harsh J" <ha...@cloudera.com> wrote:

> Your job configuration isn't picking up or passing the right default
> filesystem (fs.default.name or fs.defaultFS) before submitting the
> job. As a result, the unconfigured default of the local filesystem gets
> picked up for paths you intended to resolve on HDFS.
>
> On Friday, November 1, 2013, Omar@Gmail wrote:
>
>> Hi,
>>
>> Running from inside the IDE (IntelliJ IDEA), I'm getting an exception; see below:
>>
>> In the program arguments I specify 'input output'
>>
>> Of course 'input' does exist in HDFS, with a data file in it.
>>
>> But the code is trying to access a directory from the local project file
>> system location, not from HDFS.
>>
>> Please let me know what I have done wrong.
>>
>> Regards
>> Omar
>>
>> Nov 01, 2013 4:31:36 PM org.apache.hadoop.security.UserGroupInformation
>> doAs
>> SEVERE: PriviledgedActionException as:omarnetbox
>> cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input
>> path does not exist: file:/Users/omarnetbox/work/projects/hadoop/mrjob/input
>> Exception in thread "main"
>> org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path
>> does not exist: file:/Users/omarnetbox/work/projects/hadoop/mrjob/input
>>     at
>> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:235)
>>     at
>> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:252)
>>     at
>> org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054)
>>
>
>
> --
> Sent from my Galaxy S
>
