[
https://issues.apache.org/jira/browse/HDFS-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971896#comment-14971896
]
Kihwal Lee commented on HDFS-9290:
----------------------------------
Since no tests were run for {{hadoop-hdfs-client}}, I ran the tests manually.
There is no need to run the server-side tests, since this is a client-only change.
{panel}
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m;
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.864 sec - in
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m;
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.835 sec - in
org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m;
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileAppend3
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.2 sec - in
org.apache.hadoop.hdfs.TestFileAppend3
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m;
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.929 sec - in
org.apache.hadoop.hdfs.TestFileAppendRestart
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m;
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.247 sec - in
org.apache.hadoop.hdfs.TestFileAppend2
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m;
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.374 sec -
in org.apache.hadoop.hdfs.TestFileAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m;
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestAppendSnapshotTruncate
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.968 sec - in
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m;
support was removed in 8.0
Running org.apache.hadoop.hdfs.FileAppendTest4
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.958 sec - in
org.apache.hadoop.hdfs.FileAppendTest4
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m;
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileAppend4
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.462 sec - in
org.apache.hadoop.hdfs.TestFileAppend4
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m;
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 9.81 sec - in
org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Results :
Tests run: 48, Failures: 0, Errors: 0, Skipped: 1
{panel}
> DFSClient#callAppend() is not backward compatible for slightly older NameNodes
> ------------------------------------------------------------------------------
>
> Key: HDFS-9290
> URL: https://issues.apache.org/jira/browse/HDFS-9290
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 2.7.1
> Reporter: Tony Wu
> Assignee: Tony Wu
> Priority: Blocker
> Attachments: HDFS-9290.001.patch, HDFS-9290.002.patch
>
>
> HDFS-7210 combined the two RPC calls used at file append into a single one;
> specifically, {{getFileInfo()}} was folded into {{append()}}. While backward
> compatibility for older clients is handled by the new NameNode (via
> protobuf), a newer client's {{append()}} call does not work with older
> NameNodes. One will run into an exception like the following:
> {code:java}
> java.lang.NullPointerException
>   at org.apache.hadoop.hdfs.DFSOutputStream.isLazyPersist(DFSOutputStream.java:1741)
>   at org.apache.hadoop.hdfs.DFSOutputStream.getChecksum4Compute(DFSOutputStream.java:1550)
>   at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1560)
>   at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1670)
>   at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForAppend(DFSOutputStream.java:1717)
>   at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1861)
>   at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1922)
>   at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1892)
>   at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:340)
>   at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:336)
>   at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:336)
>   at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:318)
>   at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1164)
> {code}
> The cause is that the new client code expects both the last block and the
> file info in the same RPC response, but the old NameNode replies with only
> the first. The exception itself does not reflect this, and one has to read
> the HDFS source code to understand what actually happened.
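> For reference, the top frame of the trace is dereferencing the file status
> that never arrived. Paraphrased sketch, not verbatim 2.7.1 source, and
> {{lazyPersistPolicyId}} is a stand-in name:
> {code:java}
> // Paraphrased sketch of DFSOutputStream#isLazyPersist (2.7.x).
> private static boolean isLazyPersist(HdfsFileStatus stat) {
>   // stat is the HdfsFileStatus from the append response. An old
>   // NameNode never sends it, so stat is null here and this line is
>   // the NullPointerException at DFSOutputStream.java:1741 above.
>   return stat.getStoragePolicy() == lazyPersistPolicyId;
> }
> {code}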
> We can have the client detect that it is talking to an old NameNode and
> send an extra {{getFileInfo()}} RPC, as in the sketch below. Alternatively,
> the exception being thrown should be improved to accurately reflect the
> cause of the failure.
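> A minimal sketch of the first option (hedged: this is not a patch excerpt,
> and the exact {{callAppend()}} internals in branch-2 may differ):
> {code:java}
> // Sketch, inside DFSClient#callAppend(): the combined RPC from
> // HDFS-7210 returns a LastBlockWithStatus. An old NameNode fills in
> // only the last block and leaves the file status unset.
> LastBlockWithStatus blkWithStatus = namenode.append(src, clientName,
>     new EnumSetWritable<>(flag, CreateFlag.class));
> HdfsFileStatus status = blkWithStatus.getFileStatus();
> if (status == null) {
>   // Old NameNode detected: fetch the file info with an extra RPC
>   // instead of handing null to DFSOutputStream, which is what
>   // triggers the NPE in isLazyPersist() above.
>   status = getFileInfo(src);
> }
> // ... continue constructing the DFSOutputStream with a non-null status.
> {code}
> The extra round trip is paid only on the null-status path, so clients
> talking to a new NameNode see no additional RPC.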
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)