Is there a workaround? I want to run the WordCount sample against a file on my local filesystem. If this is not possible, do I need to put my file into HDFS and then point my program at that location?
Thanks,
Avinash

On 5/25/07, Koji Noguchi <[EMAIL PROTECTED]> wrote:
Doug,

I may be wrong, but last time I tried (on 0.12.3), MapRed didn't work with a non-default filesystem as an input (output worked fine).

https://issues.apache.org/jira/browse/HADOOP-71
https://issues.apache.org/jira/browse/HADOOP-1107

Mine failed with "org.apache.hadoop.mapred.InvalidInputException: Input path does not exist". It basically checked the default filesystem instead of the one passed in.

Koji

Doug Cutting wrote:
> But inputs don't have to be in the default filesystem, nor must they be
> in HDFS. They need to be in a filesystem that's available to all
> nodes. They could be in NFS, S3, or Ceph instead of HDFS. They could
> even be in a non-default HDFS system.
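For anyone hitting the same problem: given the bug above (input paths being checked against the default filesystem), one workaround on that era of Hadoop is to make the local filesystem the default and run the job in local mode, rather than passing a `file://` input path to a cluster job. A minimal sketch of the relevant `hadoop-site.xml` properties (names as they existed around 0.12; whether this fully sidesteps HADOOP-1107 in your version is something to verify):

```xml
<!-- hadoop-site.xml: make the local FS the default filesystem, so input and
     output paths both resolve against file:// instead of HDFS -->
<property>
  <name>fs.default.name</name>
  <value>file:///</value>
</property>

<!-- run MapReduce in-process ("local mode") instead of against a JobTracker -->
<property>
  <name>mapred.job.tracker</name>
  <value>local</value>
</property>
```

With that in place, the WordCount example can be pointed at ordinary local directories for its input and output paths. The trade-off is that everything runs on one machine; for a real cluster, the input does need to live in a filesystem all nodes can reach, as Doug notes.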
