[ 
https://issues.apache.org/jira/browse/HDFS-15461?focusedWorklogId=504178&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-504178
 ]

ASF GitHub Bot logged work on HDFS-15461:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 23/Oct/20 14:05
            Start Date: 23/Oct/20 14:05
    Worklog Time Spent: 10m 
      Work Description: amahussein commented on pull request #2404:
URL: https://github.com/apache/hadoop/pull/2404#issuecomment-715363433


   Thanks @aajisaka for the review.
   Let's merge it to 3.x then, if no one else has any objections.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 504178)
    Time Spent: 1h 10m  (was: 1h)

> TestDFSClientRetries#testGetFileChecksum fails intermittently
> -------------------------------------------------------------
>
>                 Key: HDFS-15461
>                 URL: https://issues.apache.org/jira/browse/HDFS-15461
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Ahmed Hussein
>            Assignee: Ahmed Hussein
>            Priority: Major
>              Labels: pull-request-available, test
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {{TestDFSClientRetries.testGetFileChecksum}} fails intermittently on hadoop 
> trunk
> {code:bash}
> [INFO] Running org.apache.hadoop.hdfs.TestGetFileChecksum
> [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 10.491 s <<< FAILURE! - in org.apache.hadoop.hdfs.TestGetFileChecksum
> [ERROR] testGetFileChecksum(org.apache.hadoop.hdfs.TestGetFileChecksum)  Time 
> elapsed: 4.248 s  <<< ERROR!
> java.io.IOException: Failed to replace a bad datanode on the existing 
> pipeline due to no more good datanodes being available to try. (Nodes: 
> current=[DatanodeInfoWithStorage[127.0.0.1:52468,DS-e35b6720-8ac2-4e5e-98df-306985da6924,DISK],
>  
> DatanodeInfoWithStorage[127.0.0.1:52472,DS-91ec34d5-3f0a-494e-aed6-b01fa0131d8a,DISK]],
>  
> original=[DatanodeInfoWithStorage[127.0.0.1:52472,DS-91ec34d5-3f0a-494e-aed6-b01fa0131d8a,DISK],
>  
> DatanodeInfoWithStorage[127.0.0.1:52468,DS-e35b6720-8ac2-4e5e-98df-306985da6924,DISK]]).
>  The current failed datanode replacement policy is DEFAULT, and a client may 
> configure this via 
> 'dfs.client.block.write.replace-datanode-on-failure.policy' in its 
> configuration.
>       at 
> org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1304)
>       at 
> org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1372)
>       at 
> org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1598)
>       at 
> org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1499)
>       at 
> org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1481)
>       at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:719)
> [INFO]
> [INFO] Results:
> [INFO]
> [ERROR] Errors:
> [ERROR]   TestGetFileChecksum.testGetFileChecksum » IO Failed to replace a 
> bad datanode ...
> [INFO]
> [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0
> [INFO]
> [ERROR] There are test failures.
> {code}
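> The error message above points at the client-side replace-datanode policy. As 
> a hedged sketch (not part of the actual patch on PR #2404), a test or client 
> hitting this in a small cluster could relax the policy in hdfs-site.xml; 
> {{NEVER}} is one of the values Hadoop accepts for this property, alongside 
> {{DEFAULT}} and {{ALWAYS}}:
> {code:xml}
> <!-- Sketch only: disable datanode replacement on pipeline failure.
>      Reasonable for small test clusters where no spare datanode exists;
>      not recommended for production writes. -->
> <property>
>   <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
>   <value>NEVER</value>
> </property>
> {code}
> With only 2-3 datanodes (as in MiniDFSCluster-based tests), the DEFAULT 
> policy can fail exactly as shown, since there is no additional good datanode 
> to substitute into the pipeline.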



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
