[
https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513173#comment-16513173
]
Ewan Higgs commented on HDFS-13310:
-----------------------------------
Feedback from [~chris.douglas]:
PUT_FILE adds extra complication here. When writing a file, if a DN is
partitioned from the cluster (splits) but is still writing to the remote
storage, it could interfere with another DN that has since been tasked with
writing the same file. This should be solved by adding a `complete` phase to
PUT_FILE. At that point there is very little difference between PUT_FILE and
MULTIPART_PUT_PART; with this in mind, consider removing PUT_PART.
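
A rough way to picture the unified lifecycle (these names are hypothetical and
not taken from the attached patches): with an explicit complete phase, a
single-block PUT_FILE is just a one-part upload, and nothing becomes visible in
the remote store until the complete step succeeds, so a stale DN cannot clobber
the work of the DN that finishes the write.

{code:java}
// Hypothetical sketch of a unified backup lifecycle; the names are
// illustrative only and do not come from the HDFS-13310 patches.
enum BackupPhase {
  INITIATE,  // obtain an upload handle from the remote store
  PUT_PART,  // write one block's worth of data under that handle
  COMPLETE   // commit the upload; data is not visible remotely before this
}
{code}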
> [PROVIDED Phase 2] The DatanodeProtocol should have DNA_BACKUP to back up
> blocks
> ----------------------------------------------------------------------------------
>
> Key: HDFS-13310
> URL: https://issues.apache.org/jira/browse/HDFS-13310
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Ewan Higgs
> Assignee: Ewan Higgs
> Priority: Major
> Attachments: HDFS-13310-HDFS-12090.001.patch,
> HDFS-13310-HDFS-12090.002.patch
>
>
> As part of HDFS-12090, Datanodes should be able to receive DatanodeCommands
> in the heartbeat response that instruct them to back up a block.
> This should take the form of two sub-commands: PUT_FILE (when the file is at
> most one block in size) and MULTIPART_PUT_PART (when the block is part of a
> Multipart Upload; see HDFS-13186).
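
To make the quoted description concrete, here is a minimal, self-contained
sketch of how a DN could dispatch such backup work arriving in a heartbeat
response. All class, enum, and field names below are assumptions made for
illustration; they are not taken from the attached patches or from the existing
DatanodeProtocol.

{code:java}
import java.util.Arrays;
import java.util.List;

public class BackupCommandDispatchSketch {

  /** Hypothetical stand-in for a DatanodeCommand carrying DNA_BACKUP work. */
  static class BackupCommand {
    enum Sub { PUT_FILE, MULTIPART_PUT_PART }
    final Sub sub;
    final long blockId;
    final String targetUri;
    BackupCommand(Sub sub, long blockId, String targetUri) {
      this.sub = sub;
      this.blockId = blockId;
      this.targetUri = targetUri;
    }
  }

  /** Dispatch the backup commands received in one heartbeat response. */
  static void processHeartbeatCommands(List<BackupCommand> commands) {
    for (BackupCommand cmd : commands) {
      switch (cmd.sub) {
        case PUT_FILE:
          // File is at most one block: write it to remote storage in one shot.
          System.out.println("PUT_FILE block " + cmd.blockId + " -> " + cmd.targetUri);
          break;
        case MULTIPART_PUT_PART:
          // File spans several blocks: upload this block as one part of a
          // multipart upload (HDFS-13186), completed separately.
          System.out.println("MULTIPART_PUT_PART block " + cmd.blockId + " -> " + cmd.targetUri);
          break;
      }
    }
  }

  public static void main(String[] args) {
    processHeartbeatCommands(Arrays.asList(
        new BackupCommand(BackupCommand.Sub.PUT_FILE, 1001L, "s3a://bucket/small-file"),
        new BackupCommand(BackupCommand.Sub.MULTIPART_PUT_PART, 1002L, "s3a://bucket/large-file")));
  }
}
{code}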