[
https://issues.apache.org/jira/browse/HDFS-3067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13410161#comment-13410161
]
Hudson commented on HDFS-3067:
------------------------------
Integrated in Hadoop-Common-trunk-Commit #2436 (See
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2436/])
Move CHANGES.txt entry for HDFS-3067 to branch-2 instead of trunk.
(Revision 1359331)
Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1359331
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
> NPE in DFSInputStream.readBuffer if read is repeated on corrupted block
> -----------------------------------------------------------------------
>
> Key: HDFS-3067
> URL: https://issues.apache.org/jira/browse/HDFS-3067
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs client
> Affects Versions: 0.24.0
> Reporter: Henry Robinson
> Assignee: Henry Robinson
> Fix For: 2.0.1-alpha
>
> Attachments: HDFS-3067.1.patch, HDFS-3607.patch
>
>
> With a singly-replicated block that's corrupted, issuing a read against it
> twice in succession (e.g. if ChecksumException is caught by the client) gives
> a NullPointerException.
> Here's the body of a test that reproduces the problem:
> {code}
> final short REPL_FACTOR = 1;
> final long FILE_LENGTH = 512L;
> cluster.waitActive();
> FileSystem fs = cluster.getFileSystem();
> Path path = new Path("/corrupted");
> DFSTestUtil.createFile(fs, path, FILE_LENGTH, REPL_FACTOR, 12345L);
> DFSTestUtil.waitReplication(fs, path, REPL_FACTOR);
> ExtendedBlock block = DFSTestUtil.getFirstBlock(fs, path);
> int blockFilesCorrupted = cluster.corruptBlockOnDataNodes(block);
> assertEquals("All replicas not corrupted", REPL_FACTOR,
>     blockFilesCorrupted);
> InetSocketAddress nnAddr =
>     new InetSocketAddress("localhost", cluster.getNameNodePort());
> DFSClient client = new DFSClient(nnAddr, conf);
> DFSInputStream dis = client.open(path.toString());
> byte[] arr = new byte[(int)FILE_LENGTH];
> boolean sawException = false;
> try {
>   dis.read(arr, 0, (int)FILE_LENGTH);
> } catch (ChecksumException ex) {
>   sawException = true;
> }
>
> assertTrue(sawException);
> sawException = false;
> try {
>   dis.read(arr, 0, (int)FILE_LENGTH); // <-- NPE thrown here
> } catch (ChecksumException ex) {
>   sawException = true;
> }
> {code}
> The stack:
> {code}
> java.lang.NullPointerException
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:492)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:545)
>   [snip test stack]
> {code}
> The problem is that currentNode is null: it is left null after the first
> read fails, and it is never refreshed because the condition in read() that
> guards blockSeekTo() only fires when the current position is outside the
> block's range.
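>
> To make that concrete, below is a minimal, illustrative-only sketch of the
> guard described above; aside from currentNode, blockSeekTo and read, the
> field and helper names are placeholders, not the real DFSInputStream code:
> {code}
> // Sketch of the read path: re-seek only when pos leaves the current block.
> class SketchInputStream {
>   private long pos;            // current file offset
>   private long blockEnd;       // last offset covered by the current block
>   private Object currentNode;  // datanode chosen for the current block
>
>   private Object blockSeekTo(long target) {
>     // The real client picks a datanode for the block containing 'target'
>     // and updates blockEnd; a placeholder is enough for this sketch.
>     return new Object();
>   }
>
>   private int readBuffer(byte[] buf, int off, int len) {
>     // Dereferences currentNode; the NPE surfaces here when it is null.
>     currentNode.hashCode();
>     return len;
>   }
>
>   public synchronized int read(byte[] buf, int off, int len) {
>     if (pos > blockEnd) {            // only re-seek outside the block's range
>       currentNode = blockSeekTo(pos);
>     }
>     // After the first read fails with a ChecksumException, currentNode is
>     // left null, but pos is still inside the block, so the branch above is
>     // skipped and the retried read reaches readBuffer() with a null node.
>     return readBuffer(buf, off, len);
>   }
> }
> {code}
> One possible shape for a fix would be to also re-seek when currentNode is
> null (i.e. if (pos > blockEnd || currentNode == null)), forcing another
> blockSeekTo before the retried read.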