[ https://issues.apache.org/jira/browse/HADOOP-11584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14330199#comment-14330199 ]
Hudson commented on HADOOP-11584:
---------------------------------
FAILURE: Integrated in Hadoop-trunk-Commit #7173 (See
[https://builds.apache.org/job/Hadoop-trunk-Commit/7173/])
HADOOP-11584 s3a file block size set to 0 in getFileStatus. (Brahma Reddy
Battula via stevel) (stevel: rev 709ff99cff4124823bde631e272af7be9a22f83b)
* hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3ABlocksize.java
* hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileStatus.java
* hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* hadoop-tools/hadoop-aws/src/test/resources/log4j.properties
* hadoop-common-project/hadoop-common/CHANGES.txt
> s3a file block size set to 0 in getFileStatus
> ---------------------------------------------
>
> Key: HADOOP-11584
> URL: https://issues.apache.org/jira/browse/HADOOP-11584
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 2.6.0
> Reporter: Dan Hecht
> Assignee: Brahma Reddy Battula
> Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: HADOOP-10584-003.patch, HADOOP-111584.patch,
> HADOOP-11584-002.patch
>
>
> The consequence is that MapReduce is probably not splitting s3a files in the
> expected way. This is similar to HADOOP-5861 (which was for s3n, though s3n
> passed 5 GB rather than 0 for the block size).
> FileInputFormat.getSplits() relies on the FileStatus block size being set:
> {code}
> if (isSplitable(job, path)) {
>   long blockSize = file.getBlockSize();
>   long splitSize = computeSplitSize(blockSize, minSize, maxSize);
> {code}
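> As a rough standalone sketch of why the zero block size matters (assuming the default
> effective minSize of 1 and maxSize of Long.MAX_VALUE; the formula below mirrors
> FileInputFormat.computeSplitSize but is reproduced here only for illustration):
> {code}
> public class SplitSizeSketch {
>   // Same formula as FileInputFormat.computeSplitSize().
>   static long computeSplitSize(long blockSize, long minSize, long maxSize) {
>     return Math.max(minSize, Math.min(maxSize, blockSize));
>   }
>
>   public static void main(String[] args) {
>     long minSize = 1L;             // effective default split.minsize
>     long maxSize = Long.MAX_VALUE; // default split.maxsize
>
>     // With a real block size the split size tracks the block size.
>     System.out.println(computeSplitSize(128L * 1024 * 1024, minSize, maxSize)); // 134217728
>
>     // With the 0 that S3AFileStatus currently reports, the split size collapses to minSize.
>     System.out.println(computeSplitSize(0L, minSize, maxSize)); // 1
>   }
> }
> {code}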
> However, S3AFileSystem does not set the FileStatus block size field. From
> S3AFileStatus.java:
> {code}
> // Files
> public S3AFileStatus(long length, long modification_time, Path path) {
>   super(length, false, 1, 0, modification_time, path);
>   isEmptyDirectory = false;
> }
> {code}
> I think it should use S3AFileSystem.getDefaultBlockSize() for each file's
> block size (where it's currently passing 0).
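> A minimal sketch of that suggestion follows; the extra blockSize constructor parameter
> and the surrounding call are illustrative assumptions, not necessarily the shape of the
> committed patch:
> {code}
> // In S3AFileStatus.java: accept the block size instead of hard-coding 0
> // (signature change is a sketch, not the confirmed patch).
> public S3AFileStatus(long length, long modification_time, Path path, long blockSize) {
>   super(length, false, 1, blockSize, modification_time, path);
>   isEmptyDirectory = false;
> }
>
> // In S3AFileSystem.getFileStatus(), when building the status for a file
> // (hypothetical surrounding code; "length" and "modTime" stand in for the
> // values read from the object's metadata):
> return new S3AFileStatus(length, modTime, path, getDefaultBlockSize(path));
> {code}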
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)