[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rushabh S Shah updated HDFS-12386:
----------------------------------
    Attachment: HDFS-12386-1.patch

1.
bq. What happens when a new WebHdfsFileSystem instance tries to talk to an 
older namenode?
Good catch. Added logic to handle the case of a new client talking to an old namenode, and added a test case for it as well.
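Roughly, the fallback looks like this (a simplified sketch, not the literal patch hunk; {{fetchServerDefaultsOverWebHdfs()}} is a hypothetical stand-in for the actual op runner, and the exact exception surfaced for an unknown op may differ):
{noformat}
// Sketch only: remember whether the remote namenode understands the new
// GETSERVERDEFAULTS op, so we stop retrying it against old namenodes.
private volatile boolean serverDefaultsSupported = true;

@Override
public FsServerDefaults getServerDefaults() throws IOException {
  if (serverDefaultsSupported) {
    try {
      // An old namenode rejects the unknown webhdfs 'op' parameter.
      return fetchServerDefaultsOverWebHdfs();  // hypothetical helper
    } catch (UnsupportedOperationException | IllegalArgumentException e) {
      serverDefaultsSupported = false;
    }
  }
  throw new UnsupportedOperationException(
      "getServerDefaults is not supported by the remote namenode");
}
{noformat}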

2. Fixed the checkstyle warnings except for the following one.
{noformat}
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java:1141:    case GETSERVERDEFAULTS: {:29: Avoid nested blocks. [AvoidNestedBlocks]
{noformat}
I just followed the pattern used by the other switch cases in that method.
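For context, the style in question looks roughly like this (simplified; {{toJsonString()}} and the case bodies are illustrative, not the exact patch):
{noformat}
switch (op.getValue()) {
case GETFILESTATUS: {
  // the braces scope 'js' to this case; checkstyle reports them as a
  // nested block, but the file already uses this style throughout
  final String js = toJsonString(getFileStatus(fullpath));
  return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
}
case GETSERVERDEFAULTS: {  // the flagged line
  final String js = toJsonString(serverDefaults);
  return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
}
default:
  throw new UnsupportedOperationException(op + " is not supported");
}
{noformat}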

3. Regarding the javac warning.
{noformat}
[WARNING] /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java:[1825,26] [deprecation] getServerDefaults() in FileSystem has been deprecated
{noformat}
{{DistributedFileSystem}} also overrides the deprecated {{getServerDefaults()}}, so I kept the override as is.
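For reference, the no-arg form is deprecated in {{FileSystem}} in favor of the {{Path}}-taking variant, so any override necessarily triggers the warning (the body below is illustrative only):
{noformat}
// Overriding the deprecated no-arg FileSystem#getServerDefaults(), as
// DistributedFileSystem already does; the [deprecation] warning at the
// @Override site is expected.
@Override
public FsServerDefaults getServerDefaults() throws IOException {
  // illustrative body: delegate to the non-deprecated Path-based variant
  return getServerDefaults(new Path("/"));
}
{noformat}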

4. Regarding the test failures.
All the erasure coding related test failures are consistent; they are failing in almost all of the builds.
The failing tests other than the EC-related ones were TestDirectoryScanner, TestJournalNodeSync, and TestClientProtocolForPipelineRecovery:
{noformat}
Running org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeSync
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.89 sec - in org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeSync
Running org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 151.758 sec - in org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 78.374 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
testZeroByteBlockRecovery(org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery)  Time elapsed: 12.584 sec  <<< ERROR!
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:56534,DS-b0ecc785-b07e-4f09-8aac-62eb31911401,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:56534,DS-b0ecc785-b07e-4f09-8aac-62eb31911401,DISK]]). The current failed datanode replacement policy is ALWAYS, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1317)
        at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1387)
        at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1586)
        at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1487)
        at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1469)
        at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1273)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:684)
{noformat}
TestClientProtocolForPipelineRecovery#testZeroByteBlockRecovery fails even without my patch, so all of the test failures are unrelated to this change.
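(To decode the exception message above: the policy it mentions is a client-side setting, e.g.
{noformat}
// Illustrative only, not a suggested fix for the test: relax the
// replace-datanode-on-failure policy in the client configuration.
Configuration conf = new HdfsConfiguration();
conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
FileSystem fs = FileSystem.get(conf);
{noformat}
but the failure itself predates this patch.)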

> Add fsserver defaults call to WebhdfsFileSystem.
> ------------------------------------------------
>
>                 Key: HDFS-12386
>                 URL: https://issues.apache.org/jira/browse/HDFS-12386
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: webhdfs
>            Reporter: Rushabh S Shah
>            Assignee: Rushabh S Shah
>            Priority: Minor
>         Attachments: HDFS-12386-1.patch, HDFS-12386.patch
>
>



