Anyone know why I would be getting an error doing a FileSystem.open on a file with an s3n prefix?
For the input path "s3n://backlog.dev/1296648900000/" I get the following stack trace:

java.lang.IllegalArgumentException: This file system object (hdfs://ip-10-114-89-36.ec2.internal:9000) does not support access to the request path 's3n://backlog.dev/1296648900000/32763897924550656' You possibly called FileSystem.get(conf) when you should of called FileSystem.get(uri, conf) to obtain a file system supporting your path.
    at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:351)
    at org.apache.hadoop.hdfs.DistributedFileSystem.checkPath(DistributedFileSystem.java:99)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:155)
    at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:178)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:396)
    at analytics.hadoop.socialdata.RawSignalFileInputFormat$MultiFileLineRecordReader.<init>(RawSignalFileInputFormat.java:53)
    at analytics.hadoop.socialdata.RawSignalFileInputFormat.getRecordReader(RawSignalFileInputFormat.java:22)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:343)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:312)
    at org.apache.hadoop.mapred.Child.main(Child.java:170)

Incidentally, I'm running this on Elastic MapReduce with Hadoop 0.20 (which I assume is the latest 0.20 version).
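For reference, here's a minimal sketch of the two patterns the exception message seems to be distinguishing. The class and method names below are just illustrative (they are not my actual RawSignalFileInputFormat code); I'm assuming my record reader is doing something like the first method:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RecordReaderSketch {

        // Fails for s3n:// paths: FileSystem.get(conf) returns the cluster's
        // default filesystem (HDFS on EMR), which then rejects the s3n path
        // in checkPath() with the IllegalArgumentException above.
        static FSDataInputStream openViaDefaultFs(Path path, Configuration conf) throws IOException {
            FileSystem fs = FileSystem.get(conf);
            return fs.open(path);
        }

        // Resolves the filesystem from the path's own scheme, equivalent to
        // FileSystem.get(path.toUri(), conf), so an s3n path gets the S3 filesystem.
        static FSDataInputStream openViaPathFs(Path path, Configuration conf) throws IOException {
            FileSystem fs = path.getFileSystem(conf);
            return fs.open(path);
        }
    }

Is switching to the path.getFileSystem(conf) pattern the right fix here, or is there something else about s3n on EMR that I'm missing?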