[
https://issues.apache.org/jira/browse/HADOOP-930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12599093#action_12599093
]
Owen O'Malley commented on HADOOP-930:
--------------------------------------
Can someone validate that this code works for them?
> Add support for reading regular (non-block-based) files from S3 in
> S3FileSystem
> -------------------------------------------------------------------------------
>
> Key: HADOOP-930
> URL: https://issues.apache.org/jira/browse/HADOOP-930
> Project: Hadoop Core
> Issue Type: New Feature
> Components: fs
> Affects Versions: 0.10.1
> Reporter: Tom White
> Assignee: Tom White
> Fix For: 0.18.0
>
> Attachments: hadoop-930-v2.patch, hadoop-930-v3.patch,
> hadoop-930.patch, jets3t-0.6.0.jar
>
>
> People often have input data on S3 that they want to use for a MapReduce job,
> but the current S3FileSystem implementation cannot read it because it assumes a
> block-based format.
> We would add the following metadata to files written by S3FileSystem: an
> indication that they are block-oriented ("S3FileSystem.type=block") and a
> filesystem version number ("S3FileSystem.version=1.0"). Regular S3 files
> would not carry the type metadata, so S3FileSystem would not try to interpret
> them as inodes (a sketch of such a metadata check follows this description).
> An extension to write regular files to S3 would not be covered by this change
> - we could do that as a separate piece of work. We still need to decide
> whether to introduce another scheme (e.g. rename block-based S3 to "s3fs"
> and call regular S3 "s3") or whether to just use a configuration property to
> control block-based vs. regular writes; a sketch of the scheme-based option
> also appears after this description.
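
A minimal sketch of how a reader might check for the proposed type metadata,
assuming the jets3t API shipped in the attached jets3t-0.6.0.jar; the bucket
name, object key and credentials below are placeholders, and the metadata key
names are only the proposal above, not a settled format:

    import org.jets3t.service.S3Service;
    import org.jets3t.service.S3ServiceException;
    import org.jets3t.service.impl.rest.httpclient.RestS3Service;
    import org.jets3t.service.model.S3Bucket;
    import org.jets3t.service.model.S3Object;
    import org.jets3t.service.security.AWSCredentials;

    public class S3FileTypeCheck {
      public static void main(String[] args) throws S3ServiceException {
        // Placeholder credentials and names, for illustration only.
        S3Service service = new RestS3Service(
            new AWSCredentials("ACCESS_KEY", "SECRET_KEY"));
        S3Bucket bucket = new S3Bucket("my-bucket");

        // getObjectDetails issues a HEAD request, so only the object's
        // metadata is fetched, not its data.
        S3Object details = service.getObjectDetails(bucket, "path/to/object");

        // Files written by S3FileSystem would carry the proposed type
        // metadata; plain S3 objects would not, and would be treated as
        // regular files rather than inodes.
        Object type = details.getMetadata("S3FileSystem.type");
        if ("block".equals(type)) {
          System.out.println("block-oriented S3FileSystem data (inode)");
        } else {
          System.out.println("regular S3 file");
        }
      }
    }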
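
On the scheme question, a sketch of what the scheme-based option might look
like from a client's point of view, relying on the existing fs.<scheme>.impl
lookup in FileSystem; the "s3fs" scheme is hypothetical, and the regular-file
implementation class is deliberately left unnamed because it does not exist
yet:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class SchemeMappingSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Hypothetical mapping: keep the block-based store under a new
        // "s3fs" scheme and reserve "s3" for regular files.
        conf.set("fs.s3fs.impl", "org.apache.hadoop.fs.s3.S3FileSystem");
        // conf.set("fs.s3.impl", "...");  // regular-file implementation, TBD

        // AWS credentials (fs.s3.awsAccessKeyId / fs.s3.awsSecretAccessKey)
        // would also need to be configured; omitted here.

        // FileSystem.get resolves the implementation class from
        // fs.<scheme>.impl, so each scheme gets its own behaviour.
        FileSystem blockFs = FileSystem.get(URI.create("s3fs://my-bucket/"), conf);
        System.out.println("Resolved: " + blockFs.getClass().getName());
      }
    }

The alternative - a single "s3" scheme plus a configuration property that
switches writes between block-based and regular - would keep existing URIs
unchanged but make the on-S3 format depend on client configuration rather
than on the URI.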
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.