[ https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16419882#comment-16419882 ]

Daryn Sharp commented on HDFS-13310:
------------------------------------

Is there any way to generalize this feature?  Scanning the patch, it looks like 
a leaky abstraction.  I don't understand why the DN needs all kinds of new 
commands (here and in other jiras) that are equivalent to "copy or move this 
block".  If you want to do multi-part upload to S3 magic, that should be hidden 
behind the "provided" plugin when a block is copied/moved to it, not leaked 
all throughout HDFS.
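For illustration, the abstraction suggested above might look like this. This is a minimal sketch under stated assumptions; the interface and all names here are hypothetical, not from the attached patches:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: the DN sees only one generic operation, "copy this
// block to provided storage". How the copy is performed (a single PUT, an
// S3 multipart part, etc.) stays an implementation detail of the plugin.
interface ProvidedStoragePlugin {
    void copyBlock(long blockId);
}

// A stand-in plugin that just records which blocks were copied. A real S3
// plugin would decide internally between PUT and multipart upload, so no
// upload-specific command ever has to appear in DatanodeProtocol.
class RecordingPlugin implements ProvidedStoragePlugin {
    final List<Long> copied = new ArrayList<>();

    @Override
    public void copyBlock(long blockId) {
        copied.add(blockId);
    }
}
```

With this shape, the DN-side code path is the same for every provided store: it calls copyBlock and never learns how the bytes travel.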

> [PROVIDED Phase 2] The DatanodeProtocol should have DNA_BACKUP to back up 
> blocks
> ----------------------------------------------------------------------------------
>
>                 Key: HDFS-13310
>                 URL: https://issues.apache.org/jira/browse/HDFS-13310
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Ewan Higgs
>            Assignee: Ewan Higgs
>            Priority: Major
>         Attachments: HDFS-13310-HDFS-12090.001.patch, 
> HDFS-13310-HDFS-12090.002.patch
>
>
> As part of HDFS-12090, Datanodes should be able to receive DatanodeCommands 
> in the heartbeat response that instruct them to back up a block.
> This should take the form of two sub-commands: PUT_FILE (when the file is <=1 
> block in size) and MULTIPART_PUT_PART (when the block is part of a Multipart 
> Upload; see HDFS-13186).
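A hedged sketch of the dispatch the description implies: choose PUT_FILE when the whole file fits in one block, MULTIPART_PUT_PART otherwise. The class and method names are illustrative only; the real command types live in the attached patches:

```java
// Illustrative only: pick the backup sub-command based on file size.
// PUT_FILE when the file fits in a single block; MULTIPART_PUT_PART when
// the file spans multiple blocks and each block is uploaded as one part
// of a Multipart Upload (HDFS-13186).
class BackupCommandSketch {
    enum BackupAction { PUT_FILE, MULTIPART_PUT_PART }

    static BackupAction chooseAction(long fileSizeBytes, long blockSizeBytes) {
        return fileSizeBytes <= blockSizeBytes
                ? BackupAction.PUT_FILE
                : BackupAction.MULTIPART_PUT_PART;
    }
}
```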



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
