[ https://issues.apache.org/jira/browse/HDFS-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14587918#comment-14587918 ]

Rakesh R commented on HDFS-8493:
--------------------------------

Thank you [~vinayrpet] for the review.

bq. Since all truncate() impls are moved to this class, you might need to 
update the comment. It would be better if you give the link to referring method.
I will update the comment as below; I hope that is OK.
{code}
   * Unprotected truncate implementation. Unlike
   * {@link FSDirTruncateOp#truncate}, this will not schedule block recovery.
   *
{code}

bq. Can you also verify the test failure?
IIUC, the following is the reason for the earlier test failure. The test passes 
in my environment, and the latest build also shows it passing, so the failure 
does not appear to be related to my patch. One more observation on this test 
class: {{TestFileTruncate#setup}} cannot recover if a testcase involving a 
{{snapshot}} fails, because {{fs.delete(parent, true);}} then throws an 
exception saying {{The directory /test cannot be deleted since /test is 
snapshottable and already has snapshots}}. Many tests in this class failed for 
the same reason; a possible cleanup is sketched after the log below.
{code}
2015-06-15 15:16:07,291 WARN  blockmanagement.BlockPlacementPolicy (BlockPlacementPolicyDefault.java:chooseTarget(349)) - Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2015-06-15 15:16:07,292 WARN  blockmanagement.BlockPlacementPolicy (BlockPlacementPolicyDefault.java:chooseTarget(349)) - Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2015-06-15 15:16:07,292 WARN  protocol.BlockStoragePolicy (BlockStoragePolicy.java:chooseStorageTypes(161)) - Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK, ARCHIVE], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2015-06-15 15:16:07,292 WARN  blockmanagement.BlockPlacementPolicy (BlockPlacementPolicyDefault.java:chooseTarget(349)) - Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All required storage types are unavailable:  unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2015-06-15 15:16:07,293 WARN  hdfs.DataStreamer (DataStreamer.java:run(695)) - DataStreamer Exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:47991,DS-0f451b86-376d-4589-9d49-df1387e1dd72,DISK], DatanodeInfoWithStorage[127.0.0.1:44802,DS-31bf3b2f-cfba-46b6-9e90-868dc0a9d444,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:44802,DS-31bf3b2f-cfba-46b6-9e90-868dc0a9d444,DISK], DatanodeInfoWithStorage[127.0.0.1:47991,DS-0f451b86-376d-4589-9d49-df1387e1dd72,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1145)
        at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1211)
        at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1375)
        at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1289)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:586)
{code}
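
As noted above, one way to make {{TestFileTruncate#setup}} robust would be to 
remove any leftover snapshots before deleting the test directory. Below is a 
minimal sketch of such a cleanup; the helper name {{deleteSnapshotsAndDir}} is 
hypothetical, and {{fs}}/{{parent}} stand for the fields already used in the 
test class.
{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

  /** Hypothetical helper: delete all snapshots of parent, then the dir itself. */
  static void deleteSnapshotsAndDir(DistributedFileSystem fs, Path parent)
      throws IOException {
    Path snapshotDir = new Path(parent, ".snapshot");
    if (fs.exists(snapshotDir)) {
      // Each entry under /test/.snapshot is one snapshot. Deleting them first
      // avoids "The directory /test cannot be deleted since /test is
      // snapshottable and already has snapshots".
      for (FileStatus s : fs.listStatus(snapshotDir)) {
        fs.deleteSnapshot(parent, s.getPath().getName());
      }
    }
    fs.delete(parent, true);
  }
{code}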
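
For completeness: the {{DataStreamer}} exception itself points at the datanode 
replacement policy. In a small {{MiniDFSCluster}} there may be no spare 
datanode available to replace a failed one, so a test that expects pipeline 
recovery can relax that policy. A sketch only, using the stock client 
configuration key quoted in the exception message:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

    // Sketch: relax the replace-datanode-on-failure policy for a small test
    // cluster where no spare datanode exists. Valid values for this key are
    // ALWAYS, DEFAULT and NEVER.
    Configuration conf = new HdfsConfiguration();
    conf.set("dfs.client.block.write.replace-datanode-on-failure.policy",
        "NEVER");
{code}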

> Consolidate truncate() related implementation in a single class
> ---------------------------------------------------------------
>
>                 Key: HDFS-8493
>                 URL: https://issues.apache.org/jira/browse/HDFS-8493
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Haohui Mai
>            Assignee: Rakesh R
>         Attachments: HDFS-8493-001.patch, HDFS-8493-002.patch, 
> HDFS-8493-003.patch, HDFS-8493-004.patch, HDFS-8493-005.patch, 
> HDFS-8493-006.patch
>
>
> This jira proposes to consolidate truncate() related methods into a single 
> class.


