[ https://issues.apache.org/jira/browse/HDFS-13110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16358419#comment-16358419 ]

Uma Maheswara Rao G edited comment on HDFS-13110 at 2/9/18 2:03 PM:
--------------------------------------------------------------------

Thank you [~rakeshr]

Please find my comments:

1.
{code:java}
+      String nextSPSPathId = impl.getNextSPSPathId();{code}
I think we can return the path instead of a string id. Converting the long id into a 
string seems odd, because mostly we would have to use it as a long again. How about 
we return the string path itself? The external scanner can use that path directly, 
and the internal scanner can use getInode(src) to get the INode and do the scanning. 
This will make things cleaner.
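Roughly what I have in mind (the scanner variables below are only placeholders to illustrate the two callers, not existing classes):
{code:java}
// Proposed: return the path itself rather than a long id rendered as a String.
String nextSPSPath = impl.getNextSPSPath();

// External scanner: uses the returned path directly (placeholder call).
externalScanner.scan(nextSPSPath);

// Internal scanner: resolves the INode from the path and scans it.
INode inode = namesystem.getFSDirectory().getINode(nextSPSPath);
internalScanner.scan(inode);
{code}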

2. 
{code:java}
public class ItemInfo<T> {
  private T startId;
  private T fileId;
{code}
Could we rename startId to spsPath and fileId to file?
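That is, something like the following (just a sketch of the renamed fields, mirroring the fragment above):
{code:java}
public class ItemInfo<T> {
  private T spsPath; // was: startId
  private T file;    // was: fileId
{code}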

3. getNextSPSPathId() -> getNextSPSPath()

4. In the SPS class:
{code:java}
//TODO Add join here on SPS rpc server also
{code}
Could you please remove this TODO?

5.
{code:java}
AttemptedItemInfo <T> itemInfo = iter.next();
{code}
Please fix the formatting of this line (the space before <T> looks unintended).

6. 
{code:java}
 /**
   * Gets the block collection id for which storage movements check necessary
   * and make the movement if required.
   *
   * @return block collection info
   */
  public synchronized ItemInfo<T> get() {
{code}
Please update this javadoc to say that it gets the file whose storage needs to be satisfied.
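For example, something along these lines (the wording is only a suggestion):
{code:java}
 /**
   * Gets the file for which the storage movements need to be checked and,
   * if required, satisfied.
   *
   * @return file item info
   */
  public synchronized ItemInfo<T> get() {
{code}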

7.
{code:java}
// Some of the blocks are low redundant, so marking the status as
+    // FEW_LOW_REDUNDANCY_BLOCKS.
+    if (hasLowRedundancyBlocks) {
+      status = BlocksMovingAnalysis.Status.FEW_LOW_REDUNDANCY_BLOCKS;
+    }
{code}
I think if some blocks were paired successfully, then we could mark the status as a 
few blocks paired. Only if no blocks were paired successfully and some blocks are 
low redundant should the state be BlocksMovingAnalysis.Status.FEW_LOW_REDUNDANCY_BLOCKS.
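A rough sketch of the condition I mean (hasBlocksPaired and the BLOCKS_TARGETS_PAIRED name are only placeholders for whatever the analysis already tracks):
{code:java}
// Prefer reporting that some block/target pairings succeeded; only fall back
// to FEW_LOW_REDUNDANCY_BLOCKS when nothing could be paired and some blocks
// are low redundant.
if (hasBlocksPaired) {
  status = BlocksMovingAnalysis.Status.BLOCKS_TARGETS_PAIRED; // placeholder name
} else if (hasLowRedundancyBlocks) {
  status = BlocksMovingAnalysis.Status.FEW_LOW_REDUNDANCY_BLOCKS;
}
{code}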


> [SPS]: Reduce the number of APIs in NamenodeProtocol used by external 
> satisfier
> -------------------------------------------------------------------------------
>
>                 Key: HDFS-13110
>                 URL: https://issues.apache.org/jira/browse/HDFS-13110
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Rakesh R
>            Assignee: Rakesh R
>            Priority: Major
>         Attachments: HDFS-13110-HDFS-10285-00.patch, 
> HDFS-13110-HDFS-10285-01.patch, HDFS-13110-HDFS-10285-02.patch, 
> HDFS-13110-HDFS-10285-03.patch
>
>
> This task is to address the following [~daryn]'s comments. Please refer 
> HDFS-10285 to see more detailed discussion.
> *Comment-10)*
> {quote}
> NamenodeProtocolTranslatorPB
> Most of the api changes appear unnecessary.
> IntraSPSNameNodeContext#getFileInfo swallows all IOEs, based on assumption 
> that any and all IOEs means FNF which probably isn’t the intention during rpc 
> exceptions.
> {quote}
>  *Comment-13)*
> {quote}
> StoragePolicySatisfier
>  It appears to make back-to-back calls to hasLowRedundancyBlocks and 
> getFileInfo for every file. Haven’t fully groked the code, but if low 
> redundancy is not the common case, then it shouldn’t be called unless/until 
> needed. It looks like files that are under replicated are re-queued again?
> {quote}


