Any comment on this?
On Aug 28, 2012, at 11:43 PM, Chris Collins <[email protected]> wrote:

> I was attempting to use the native S3 file system outside of any map 
> reduce tasks.  A simple example is trying to create a directory:
> 
> 
> FileSystem fs = FileSystem.get(uri, conf);
> Path currPath = new Path("/a/b/c");
> fs.mkdirs(currPath);
> 
> (I can provide full code if needed; a rough self-contained sketch of 
> what I am doing is below.)
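> 
> Something along these lines, with the bucket name and credentials as 
> placeholders (and assuming the usual fs.s3n.* credential properties):
> 
> import java.net.URI;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> 
> public class S3nMkdirTest {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     // Placeholder credentials; they can also be embedded in the URI.
>     conf.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY");
>     conf.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY");
> 
>     // Placeholder bucket; the s3n scheme selects the native S3 file system.
>     URI uri = URI.create("s3n://your-bucket/");
>     FileSystem fs = FileSystem.get(uri, conf);
> 
>     Path currPath = new Path("/a/b/c");
>     fs.mkdirs(currPath);
>     fs.close();
>   }
> }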
> 
> Anyway, the class Jets3tNativeFileSystemStore attempts to detect whether 
> each key part of the object path exists, expecting a 404 response if it 
> does not:
> 
> public FileMetadata retrieveMetadata(String key) throws IOException {
>     try {
>       S3Object object = s3Service.getObjectDetails(bucket, key);
>       return new FileMetadata(key, object.getContentLength(),
>           object.getLastModifiedDate().getTime());
>     } catch (S3ServiceException e) {
>       // Following is brittle. Is there a better way?
>       if (e.getMessage().contains("ResponseCode=404")) {
>         return null;
>       }
>       if (e.getCause() instanceof IOException) {
>         throw (IOException) e.getCause();
>       }
>       throw new S3Exception(e);
>     }
>   }
> 
> All versions of jets3t I have looked at that seem to have a compatible 
> class structure (i.e. don't blow up on AWSCredentials) actually return an 
> exception whose message contains "...ResponseCode: 404..." (a colon rather 
> than an equals sign), so the string match above never succeeds.
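> 
> A quick throwaway check along these lines (using the same s3Service and 
> bucket objects as the code above; the key is just a placeholder) shows 
> what the exception actually carries:
> 
>     try {
>       s3Service.getObjectDetails(bucket, "some/key/that/does/not/exist");
>     } catch (S3ServiceException e) {
>       // The message text varies across jets3t versions ("ResponseCode: 404"
>       // vs "ResponseCode=404"), which is why matching on it is brittle...
>       System.out.println("message: " + e.getMessage());
>       // ...whereas the exception reports the HTTP status directly.
>       System.out.println("response code: " + e.getResponseCode());
>     }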
> 
> I took a copy of the code in this directory and fixed the catch block to 
> read:
> 
> public FileMetadata retrieveMetadata(String key) throws IOException {
>     try {
>       S3Object object = s3Service.getObjectDetails(bucket, key);
>       return new FileMetadata(key, object.getContentLength(),
>           object.getLastModifiedDate().getTime());
>     } catch (S3ServiceException e) {
>       // Following is brittle. Is there a better way?
>       if (e.getResponseCode() == 404) {
>         return null;
>       }
>       if (e.getCause() instanceof IOException) {
>         throw (IOException) e.getCause();
>       }
>       throw new S3Exception(e);
>     }
>   }
> 
> which seems to fix the issue.  Am I missing something?  Also, this seems 
> to have been broken for a variety of Hadoop versions.  Does anyone 
> actually use this code path, and if so, is there a valid version 
> combination that should have worked for me?
> 
> Comments welcome.
> 
> Chris
