Ah, nice catch. I'll go fix that message now :)

On Wed, Feb 9, 2011 at 4:50 PM, Jeremy Hanna <jeremy.hanna1...@gmail.com> wrote:
> Bah - you're right.  I don't know why I thought the real error was obscured, 
> beyond being distracted by "you should of", which should be "you should have".
>
> Thanks and apologies...
>
> Jeremy
>
> On Feb 9, 2011, at 6:10 PM, Andrew Hitchcock wrote:
>
>> "This file system object (hdfs://ip-10-114-89-36.ec2.internal:9000)
>> does not support access to the request path
>> 's3n://backlog.dev/1296648900000/32763897924550656' You possibly
>> called FileSystem.get(conf) when you should of called
>> FileSystem.get(uri, conf) to obtain a file system supporting your
>> path."
>>
>> That explains the error. You should always use the two-parameter get
>> method when requesting FileSystem objects for a path that may not live
>> on the default filesystem.
>>
>> Also, Elastic MapReduce is based on the Hadoop 0.20 branch. It has all
>> the patches from Hadoop 0.20.2 plus some additional ones from that
>> branch and other places.
>>
>> Andrew
>>
>> On Wed, Feb 9, 2011 at 3:23 PM, Jeremy Hanna <jeremy.hanna1...@gmail.com> 
>> wrote:
>>> Anyone know why I would be getting an error doing a filesystem.open on a 
>>> file with a s3n prefix?
>>>
>>> for the input path "s3n://backlog.dev/1296648900000/" - I get the following 
>>> stacktrace:
>>>
>>> java.lang.IllegalArgumentException: This file system object 
>>> (hdfs://ip-10-114-89-36.ec2.internal:9000) does not support access to the 
>>> request path 's3n://backlog.dev/1296648900000/32763897924550656' You 
>>> possibly called FileSystem.get(conf) when you should of called 
>>> FileSystem.get(uri, conf) to obtain a file system supporting your path.
>>>        at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:351)
>>>        at org.apache.hadoop.hdfs.DistributedFileSystem.checkPath(DistributedFileSystem.java:99)
>>>        at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:155)
>>>        at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:178)
>>>        at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:396)
>>>        at analytics.hadoop.socialdata.RawSignalFileInputFormat$MultiFileLineRecordReader.<init>(RawSignalFileInputFormat.java:53)
>>>        at analytics.hadoop.socialdata.RawSignalFileInputFormat.getRecordReader(RawSignalFileInputFormat.java:22)
>>>        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:343)
>>>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:312)
>>>        at org.apache.hadoop.mapred.Child.main(Child.java:170)
>>>
>>> Incidentally, I'm running on Elastic MapReduce with Hadoop version 0.20 
>>> (which I assume is the latest 0.20 version).
>
>
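For anyone hitting this later: the two-argument pattern Andrew describes is FileSystem.get(path.toUri(), conf) rather than FileSystem.get(conf), which always returns the cluster's default filesystem (HDFS here). The sketch below mimics the scheme comparison that Hadoop's FileSystem.checkPath performs, to show why the default hdfs filesystem rejects an s3n:// path. The class and method names below are illustrative, not Hadoop's own.

```java
import java.net.URI;

// Illustrative sketch (not Hadoop code): a filesystem rooted at fsUri
// serves a path only when the path's scheme matches its own, much like
// FileSystem.checkPath does.
public class SchemeCheck {
    // Returns true when the filesystem rooted at fsUri can serve the path.
    static boolean supports(URI fsUri, URI path) {
        String pathScheme = path.getScheme();
        // A relative path (no scheme) resolves against the default filesystem.
        if (pathScheme == null) return true;
        return pathScheme.equalsIgnoreCase(fsUri.getScheme());
    }

    public static void main(String[] args) {
        URI defaultFs = URI.create("hdfs://ip-10-114-89-36.ec2.internal:9000");
        URI s3nPath = URI.create("s3n://backlog.dev/1296648900000/");
        // The default hdfs filesystem rejects the s3n path...
        System.out.println(supports(defaultFs, s3nPath));   // false
        // ...while a filesystem chosen from the path's own URI accepts it.
        System.out.println(supports(s3nPath, s3nPath));     // true
    }
}
```

With FileSystem.get(uri, conf), Hadoop selects a filesystem whose scheme matches the path (S3N here), so the equivalent check passes and fs.open(path) succeeds.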
