[ 
https://issues.apache.org/jira/browse/HDFS-10800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-10800:
---------------------------------------
    Attachment: HDFS-10800-HDFS-10285-05.patch

A minor update to the patch: I forgot to move blockMovingInfos inside the 
computeAndAssign* API. Please review this patch.

> [SPS]: Daemon thread in Namenode to find blocks placed in other storage than 
> what the policy specifies
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-10800
>                 URL: https://issues.apache.org/jira/browse/HDFS-10800
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: namenode
>    Affects Versions: HDFS-10285
>            Reporter: Uma Maheswara Rao G
>            Assignee: Uma Maheswara Rao G
>         Attachments: HDFS-10800-HDFS-10285-00.patch, 
> HDFS-10800-HDFS-10285-01.patch, HDFS-10800-HDFS-10285-02.patch, 
> HDFS-10800-HDFS-10285-03.patch, HDFS-10800-HDFS-10285-04.patch, 
> HDFS-10800-HDFS-10285-05.patch
>
>
> This JIRA is for implementing a daemon thread called StoragePolicySatisfier 
> in the namenode, which scans the blocks of the requested files and finds any 
> placed on different storages in the DNs than their storage policies specify. 
>  The idea is:
>       # When a user asks for some files/dirs to have their storage policy 
> satisfied, they are tracked in the NN; the StoragePolicySatisfier thread then 
> picks the files one by one and checks for blocks that may have been placed 
> on a different storage in the DN than what the storage policy expects.
>       # After checking all the blocks, it constructs the data structures with 
> the required information to move each block from one storage to another.
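The per-block mismatch check described above can be sketched in plain Java. This is a hypothetical, simplified illustration only: the class and field names (BlockMovingInfo, computeMoves, source/target storage types) are assumptions for this sketch, not the actual APIs in the HDFS-10800 patch. It compares the storage types a policy expects for a block's replicas against the storage types the replicas actually occupy, and emits one move record per mismatched replica.

```java
import java.util.ArrayList;
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

public class SpsSketch {
  // Simplified stand-in for HDFS storage types.
  enum StorageType { DISK, SSD, ARCHIVE, RAM_DISK }

  /** Describes one replica that must move from one storage type to another. */
  static class BlockMovingInfo {
    final long blockId;
    final StorageType source;
    final StorageType target;
    BlockMovingInfo(long blockId, StorageType source, StorageType target) {
      this.blockId = blockId;
      this.source = source;
      this.target = target;
    }
  }

  /**
   * Compare the storage types the policy expects for a block's replicas with
   * the storage types the replicas actually occupy, and emit one
   * BlockMovingInfo per replica that is on the wrong storage.
   */
  static List<BlockMovingInfo> computeMoves(long blockId,
                                            List<StorageType> expected,
                                            List<StorageType> actual) {
    // Count how many replicas the policy wants on each storage type.
    Map<StorageType, Integer> wanted = new EnumMap<>(StorageType.class);
    for (StorageType t : expected) {
      wanted.merge(t, 1, Integer::sum);
    }
    // Replicas already on a wanted storage type satisfy the policy;
    // the rest are misplaced and become move candidates.
    List<StorageType> misplaced = new ArrayList<>();
    for (StorageType t : actual) {
      Integer n = wanted.get(t);
      if (n != null && n > 0) {
        wanted.put(t, n - 1);
      } else {
        misplaced.add(t);
      }
    }
    // Pair each misplaced replica with a still-unsatisfied target type.
    List<BlockMovingInfo> moves = new ArrayList<>();
    int i = 0;
    for (Map.Entry<StorageType, Integer> e : wanted.entrySet()) {
      for (int k = 0; k < e.getValue() && i < misplaced.size(); k++, i++) {
        moves.add(new BlockMovingInfo(blockId, misplaced.get(i), e.getKey()));
      }
    }
    return moves;
  }

  public static void main(String[] args) {
    // Example: a COLD-style policy expects all three replicas on ARCHIVE,
    // but two replicas are still on DISK -> two moves are scheduled.
    List<BlockMovingInfo> moves = computeMoves(1001L,
        List.of(StorageType.ARCHIVE, StorageType.ARCHIVE, StorageType.ARCHIVE),
        List.of(StorageType.ARCHIVE, StorageType.DISK, StorageType.DISK));
    for (BlockMovingInfo m : moves) {
      System.out.println("block " + m.blockId + ": " + m.source + " -> " + m.target);
    }
  }
}
```

In the real patch the scan runs over tracked file IDs in the namenode and the resulting move records are handed to DataNodes, but the core comparison is the same shape as this sketch.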



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
