Rakesh R commented on HDFS-12090:

Keep in mind that when we perform a multipart-multinode upload, the multipart 
init and complete calls also need to be ordered. But I think we can do them 
from the NameNode. Since {{internal SPS}} tracks file-block movement at the 
namenode, the multipart init and complete logic should be included on the 
namenode side. Can we implement the *init* logic in the 
{{IntraSPSNameNodeBlockMoveTaskHandler}} class, which is meant for internal 
SPS only? Maybe we could change the method signature to pass an array of 
{{blockMovingInfos}}. IIUC, the *complete* call is invoked once all the blocks 
for a file are satisfied. If yes, we could provide a new hook for the file 
when it reaches {{BLOCKS_ALREADY_SATISFIED}}.
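To make the proposed ordering concrete, here is a minimal sketch of the flow: init once per file, one part per block from the {{blockMovingInfos}} array, and complete only from the satisfied hook. All class and method names below ({{MultipartUploader}}, {{handleBlockMoves}}, {{onBlocksAlreadySatisfied}}) are hypothetical stand-ins for illustration, not actual HDFS code:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the per-block move request tracked by SPS.
class BlockMovingInfo {
    final long blockId;
    BlockMovingInfo(long blockId) { this.blockId = blockId; }
}

// Hypothetical stand-in for an S3-style multipart protocol on the
// external (PROVIDED) store; records calls so ordering can be checked.
class MultipartUploader {
    final List<String> calls = new ArrayList<>();
    String init(String file) { calls.add("init:" + file); return "upload-" + file; }
    void uploadPart(String uploadId, long blockId) { calls.add("part:" + blockId); }
    void complete(String uploadId) { calls.add("complete:" + uploadId); }
}

// Sketch of what an internal-SPS task handler could do with the proposed
// array-based signature: init first, dispatch one part per block, then
// complete via the BLOCKS_ALREADY_SATISFIED hook.
class IntraSPSBlockMoveSketch {
    private final MultipartUploader uploader;
    IntraSPSBlockMoveSketch(MultipartUploader uploader) { this.uploader = uploader; }

    // Proposed signature change: take the whole array of blockMovingInfos
    // for a file instead of one block at a time.
    void handleBlockMoves(String file, BlockMovingInfo[] blockMovingInfos) {
        String uploadId = uploader.init(file);           // ordered: init first
        for (BlockMovingInfo info : blockMovingInfos) {
            uploader.uploadPart(uploadId, info.blockId); // one part per block
        }
        // In the real flow the NameNode would fire this hook asynchronously
        // once it observes every block of the file satisfied; the sketch
        // calls it inline for simplicity.
        onBlocksAlreadySatisfied(file, uploadId);
    }

    // Hypothetical hook, invoked once all blocks for the file are satisfied.
    void onBlocksAlreadySatisfied(String file, String uploadId) {
        uploader.complete(uploadId);                     // ordered: complete last
    }
}
```

The point of the sketch is only the ordering invariant: {{init}} strictly before any part, {{complete}} strictly after all parts, both driven from the NameNode side where internal SPS already tracks per-file block state.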

> Handling writes from HDFS to Provided storages
> ----------------------------------------------
>                 Key: HDFS-12090
>                 URL: https://issues.apache.org/jira/browse/HDFS-12090
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Virajith Jalaparti
>            Priority: Major
>         Attachments: HDFS-12090-Functional-Specification.001.pdf, 
> HDFS-12090-Functional-Specification.002.pdf, 
> HDFS-12090-Functional-Specification.003.pdf, HDFS-12090-design.001.pdf, 
> HDFS-12090.0000.patch, HDFS-12090.0001.patch
>
> HDFS-9806 introduces the concept of {{PROVIDED}} storage, which makes data in 
> external storage systems accessible through HDFS. However, HDFS-9806 is 
> limited to data being read through HDFS. This JIRA will deal with how data 
> can be written to such {{PROVIDED}} storages from HDFS.

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
