[ https://issues.apache.org/jira/browse/HADOOP-5459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12680851#action_12680851 ]
Hadoop QA commented on HADOOP-5459:
-----------------------------------
+1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12401904/5459-1.patch
against trunk revision 752405.
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 2 new or modified tests.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac
compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs warnings.
+1 Eclipse classpath. The patch retains Eclipse classpath integrity.
+1 release audit. The applied patch does not increase the total number of
release audit warnings.
+1 core tests. The patch passed core unit tests.
+1 contrib tests. The patch passed contrib unit tests.
Test results:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-minerva.apache.org/49/testReport/
Findbugs warnings:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-minerva.apache.org/49/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-minerva.apache.org/49/artifact/trunk/build/test/checkstyle-errors.html
Console output:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-minerva.apache.org/49/console
This message is automatically generated.
> CRC errors not detected reading intermediate output into memory with
> problematic length
> ---------------------------------------------------------------------------------------
>
> Key: HADOOP-5459
> URL: https://issues.apache.org/jira/browse/HADOOP-5459
> Project: Hadoop Core
> Issue Type: Bug
> Affects Versions: 0.20.0
> Reporter: Chris Douglas
> Assignee: Chris Douglas
> Priority: Blocker
> Attachments: 5459-0.patch, 5459-1.patch
>
>
> It's possible that the expected, uncompressed length of the segment is less
> than the available/decompressed data. This can happen in some worst cases for
> compression, but it is exceedingly rare. It is also possible (though also
> fantastically unlikely) for the data to deflate to a size greater than that
> reported by the map. CRC errors will remain undetected because
> IFileInputStream does not validate the checksum until the end of the stream,
> and close() does not advance the stream to the end of the segment. The
> (abbreviated) read loop fetching data in shuffleInMemory:
> {code}
> int n = input.read(shuffleData, 0, shuffleData.length);
> while (n > 0) {
>   bytesRead += n;
>   n = input.read(shuffleData, bytesRead,
>                  (shuffleData.length - bytesRead));
> }
> {code}
> will read only up to the expected length; unless the whole segment is
> consumed, the checksum is never validated. IFileInputStream should validate
> its checksum even when the instance is closed before the end of the segment
> is reached.
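> A minimal sketch of one way to close that gap (the class name, the 4 KB
> scratch buffer, and the CRC32 bookkeeping below are illustrative assumptions,
> not the actual IFileInputStream internals or the contents of 5459-1.patch):
> keep a running checksum over everything returned to the caller, and have
> close() drain the rest of the segment so the checksum is always verified,
> even after a short read.
> {code}
> import java.io.FilterInputStream;
> import java.io.IOException;
> import java.io.InputStream;
> import java.util.zip.CRC32;
> import java.util.zip.Checksum;
>
> // Illustrative stand-in for IFileInputStream, not the actual patch:
> // a stream that accumulates a CRC over all bytes it returns and
> // verifies it on close(), even if the caller stopped reading early.
> public class ChecksumOnCloseStream extends FilterInputStream {
>   private final Checksum sum = new CRC32();
>   private final long expected; // CRC recorded by the writer (assumed)
>
>   public ChecksumOnCloseStream(InputStream in, long expected) {
>     super(in);
>     this.expected = expected;
>   }
>
>   @Override
>   public int read() throws IOException {
>     int c = in.read();
>     if (c >= 0) {
>       sum.update(c);
>     }
>     return c;
>   }
>
>   @Override
>   public int read(byte[] b, int off, int len) throws IOException {
>     int n = in.read(b, off, len);
>     if (n > 0) {
>       sum.update(b, off, n);
>     }
>     return n;
>   }
>
>   @Override
>   public void close() throws IOException {
>     // Drain the remainder of the segment so the CRC covers every
>     // byte; a caller that reads only the expected length can no
>     // longer mask a corrupt tail.
>     byte[] scratch = new byte[4096];
>     while (read(scratch, 0, scratch.length) != -1) {
>       // discard; reading to EOF is what completes the checksum
>     }
>     in.close();
>     if (sum.getValue() != expected) {
>       throw new IOException("CRC mismatch: expected " + expected
>           + ", got " + sum.getValue());
>     }
>   }
> }
> {code}
> With close() responsible for validation, a read loop like the one in
> shuffleInMemory above could stay as-is and still surface corruption in the
> unread tail of a segment.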
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.