[
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368648#comment-16368648
]
Rakesh R edited comment on HDFS-10285 at 2/18/18 7:37 PM:
----------------------------------------------------------
We have worked on the comments and the following is a quick update.
{{Comment-1)}} => DONE via HDFS-13097
{{Comment-2)}} => DONE via HDFS-13097
{{Comment-5)}} => DONE via HDFS-13097
{{Comment-8)}} => DONE via HDFS-13097
{{Comment-10)}} => DONE via HDFS-13110
{{Comment-11)}} => DONE via HDFS-13097
{{Comment-12)}} => DONE via HDFS-13097
{{Comment-13)}} => DONE via HDFS-13110
{{Comment-15)}} => DONE via HDFS-13097
{{Comment-16)}} => DONE via HDFS-13097
{{Comment-18)}} => DONE via HDFS-13097
{{Comment-19)}} => DONE via HDFS-13097
{{Comment-22)}} => DONE via HDFS-13097
*For the comments below*, it would be great to hear your thoughts. Please let
me know your feedback on my replies.
{{Comment-3)}} => This comment has two parts: IBR and data transfer. The IBR
part will be explored and implemented via the HDFS-13165 sub-task. However, the
data transfer part is not concluded yet. How do we incorporate a local move into
it? Currently the data transfer path has no such logic; IIUC, DNA_TRANSFER is
used to send a copy of a block to another datanode. On the other hand, the mover
tool uses replaceBlock() for block movement, and that already supports moving a
block to a different storage within the same datanode. How about using the
{{replaceBlock}} pattern here in SPS as well (see the sketch after this list)?
{{Comment-4)}} => Depends on Comment-3.
{{Comment-6, Comment-9, Comment-14, Comment-17)}} => I need to understand these
more.
{{Comment-20)}} => Depends on Comment-3.
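To make the Comment-3 suggestion concrete, here is a minimal sketch of the
{{replaceBlock}} pattern applied to an SPS local move. All class and method
names below are hypothetical illustrations, not the actual HDFS code; the real
{{replaceBlock}} signature lives in the DataTransferProtocol Sender/Receiver
classes and is not reproduced here.
{code:java}
/**
 * Hypothetical sketch: move one replica to a different storage type on the
 * SAME datanode by reusing the Mover/Dispatcher-style replaceBlock pattern,
 * i.e. source datanode == target datanode and only the storage type differs.
 */
class LocalBlockMoveSketch {

  /** Illustrative description of a single scheduled move. */
  static class BlockMoveTask {
    final long blockId;
    final String datanode;       // same node acts as source and target
    final String sourceStorage;  // e.g. "DISK"
    final String targetStorage;  // e.g. "ARCHIVE"

    BlockMoveTask(long blockId, String datanode,
                  String sourceStorage, String targetStorage) {
      this.blockId = blockId;
      this.datanode = datanode;
      this.sourceStorage = sourceStorage;
      this.targetStorage = targetStorage;
    }
  }

  void satisfyLocally(BlockMoveTask task) {
    // As in the Mover, a replaceBlock-style request that names the same
    // datanode as both source and target lets the datanode perform a cheap
    // local move across volumes instead of a network copy.
    sendReplaceBlock(task.datanode, task.blockId,
        task.sourceStorage, task.targetStorage);
    // Completion would then be observed via IBR (see HDFS-13165).
  }

  /** Placeholder for the OP_REPLACE_BLOCK data-transfer call. */
  void sendReplaceBlock(String datanode, long blockId,
                        String sourceStorage, String targetStorage) {
    // ... ask 'datanode' to store the replica on 'targetStorage' and drop
    // the copy on 'sourceStorage' ...
  }
}
{code}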
*In Progress tasks:*
{{Comment-3)}} => HDFS-13165; this jira will only implement the logic to collect
back the moved blocks via IBR (a rough sketch follows this task list).
{{Comment-21)}} => HDFS-13165
{{Comment-7)}} => HDFS-13166
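For the HDFS-13165 item above, a rough illustration of how the namenode-side
SPS could confirm a scheduled move once the datanode's incremental block
report (IBR) shows the replica on the expected storage type. Everything below
is a hypothetical sketch, not the committed code:
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical sketch: track in-flight SPS moves and mark each complete
 * when an IBR reports the block on the expected storage type, so no
 * explicit acknowledgement from the datanode is required.
 */
class SpsIbrTrackerSketch {
  /** blockId -> storage type the SPS expects the replica to land on. */
  private final Map<Long, String> pendingMoves = new ConcurrentHashMap<>();

  void moveScheduled(long blockId, String expectedStorage) {
    pendingMoves.put(blockId, expectedStorage);
  }

  /** Invoked from (hypothetical) IBR processing for each reported replica. */
  void onIncrementalBlockReport(long blockId, String reportedStorage) {
    String expected = pendingMoves.get(blockId);
    if (expected != null && expected.equals(reportedStorage)) {
      pendingMoves.remove(blockId);  // move confirmed by the IBR
    }
  }

  boolean isPending(long blockId) {
    return pendingMoves.containsKey(blockId);
  }
}
{code}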
> Storage Policy Satisfier in Namenode
> ------------------------------------
>
> Key: HDFS-10285
> URL: https://issues.apache.org/jira/browse/HDFS-10285
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: datanode, namenode
> Affects Versions: HDFS-10285
> Reporter: Uma Maheswara Rao G
> Assignee: Uma Maheswara Rao G
> Priority: Major
> Attachments: HDFS-10285-consolidated-merge-patch-00.patch,
> HDFS-10285-consolidated-merge-patch-01.patch,
> HDFS-10285-consolidated-merge-patch-02.patch,
> HDFS-10285-consolidated-merge-patch-03.patch,
> HDFS-10285-consolidated-merge-patch-04.patch,
> HDFS-10285-consolidated-merge-patch-05.patch,
> HDFS-SPS-TestReport-20170708.pdf, SPS Modularization.pdf,
> Storage-Policy-Satisfier-in-HDFS-June-20-2017.pdf,
> Storage-Policy-Satisfier-in-HDFS-May10.pdf,
> Storage-Policy-Satisfier-in-HDFS-Oct-26-2017.pdf
>
>
> Heterogeneous storage in HDFS introduced the concept of storage policies. These
> policies can be set on a directory/file to specify the user's preference for
> where the physical blocks should be stored. When the user sets the storage
> policy before writing data, the blocks can take advantage of the storage policy
> preference and the physical blocks are stored accordingly.
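> For example, with the existing FileSystem API a client can pin a directory to
> a policy before writing so that new blocks honor it (the path and the COLD
> policy below are just for illustration):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class SetPolicyBeforeWrite {
>   public static void main(String[] args) throws Exception {
>     FileSystem fs = FileSystem.get(new Configuration());
>     Path dir = new Path("/archive/logs");  // illustrative path
>     fs.mkdirs(dir);
>     // Set the policy BEFORE writing, so new blocks are placed accordingly.
>     fs.setStoragePolicy(dir, "COLD");
>   }
> }
> {code}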
> If the user sets the storage policy after the file has been written and
> completed, the blocks will already have been written with the default storage
> policy (i.e., DISK). The user then has to run the ‘Mover tool’ explicitly,
> specifying all such file names as a list. In some distributed scenarios (e.g.,
> HBase) it would be difficult to collect all the files and run the tool, as
> different nodes can write files independently and the files can have different
> paths.
> Another scenario: when the user renames a file from a directory with one
> effective storage policy (inherited from the parent directory) to a directory
> with a different storage policy, the inherited storage policy is not copied
> from the source, so the file takes on the storage policy of the destination's
> parent. This rename operation is just a metadata change in the Namenode; the
> physical blocks still remain placed according to the source storage policy.
> So, tracking all such file names across distributed nodes (e.g., region
> servers) and running the Mover tool could be difficult for admins.
> The proposal here is to provide an API in the Namenode itself to trigger
> storage policy satisfaction. A daemon thread inside the Namenode would track
> such calls and dispatch movement commands to the DNs.
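> As a sketch of the envisioned usage (the {{satisfyStoragePolicy}} method name
> is assumed here purely to illustrate the proposed API shape):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hdfs.DistributedFileSystem;
>
> public class TriggerSps {
>   public static void main(String[] args) throws Exception {
>     // Assumes fs.defaultFS points at an HDFS cluster.
>     DistributedFileSystem dfs =
>         (DistributedFileSystem) FileSystem.get(new Configuration());
>     // Ask the Namenode to asynchronously move the blocks under the path so
>     // they match the path's storage policy; DNs receive movement commands.
>     dfs.satisfyStoragePolicy(new Path("/archive/logs"));
>   }
> }
> {code}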
> Will post a detailed design document soon.