So I have been looking at the source and single-stepping through a simple 
test case of mine that makes a directory using FileSystem.mkdirs().
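For reference, the test boils down to something like this (a minimal 
sketch; the bucket name, path, and inline credentials are placeholders, 
and I am assuming the native s3n:// filesystem):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3MkdirsTest {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder credentials; normally these live in core-site.xml.
    conf.set("fs.s3n.awsAccessKeyId", "ACCESS_KEY");
    conf.set("fs.s3n.awsSecretAccessKey", "SECRET_KEY");

    // "my-bucket" is a placeholder bucket name.
    FileSystem fs = FileSystem.get(URI.create("s3n://my-bucket"), conf);

    // mkdirs() ends up calling retrieveMetadata() on the store, which
    // seems to be where the 404 from S3 surfaces.
    fs.mkdirs(new Path("/aaaa"));
  }
}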

I see that I am getting a 404 (nothing new there).  However, the code that is 
throwing it is shown below; note the comment about being "brittle".  It is 
looking for "ResponseCode=404" in the exception message, but in this case the 
message contains "ResponseCode: 404", so the check never matches and the 404 
propagates as an exception instead of returning null.

Should I just forward this to the developer list, or has anyone come across 
this before?

public FileMetadata retrieveMetadata(String key) throws IOException {
  try {
    S3Object object = s3Service.getObjectDetails(bucket, key);
    return new FileMetadata(key, object.getContentLength(),
        object.getLastModifiedDate().getTime());
  } catch (S3ServiceException e) {
    // Following is brittle. Is there a better way?
    if (e.getMessage().contains("ResponseCode=404")) {
      return null;
    }
    if (e.getCause() instanceof IOException) {
      throw (IOException) e.getCause();
    }
    throw new S3Exception(e);
  }
}
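
For what it's worth, jets3t's S3ServiceException appears to expose the HTTP 
status directly, so a less brittle check might compare the numeric code 
instead of matching on message text (just a sketch, assuming getResponseCode() 
is available in the jets3t version Hadoop bundles):

  } catch (S3ServiceException e) {
    // Check the HTTP status itself instead of parsing the message text.
    if (e.getResponseCode() == 404) {
      return null;
    }
    if (e.getCause() instanceof IOException) {
      throw (IOException) e.getCause();
    }
    throw new S3Exception(e);
  }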




On Aug 28, 2012, at 12:14 AM, Chris Collins <[email protected]> wrote:

> Hi, I am trying to use the Hadoop filesystem abstraction with S3, but in my 
> tinkering I am not having a great deal of success.  I am particularly 
> interested in the ability to mimic a directory structure (since S3 native 
> doesn't do it).
> 
> Can anyone point me to some good example usage of Hadoop FileSystem with s3?
> 
> I created a few directories using transit and the AWS S3 console for testing.  
> Doing a listStatus of the bucket returns a FileStatus object for the directory 
> I created, but if I try to do a listStatus of that path I get a 404:
> 
> org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: 
> Request Error. HEAD '/aaaa' on Host ....
> 
> Probably not the best list to look for help on, but any clues are appreciated.
> 
> C
