[ 
https://issues.apache.org/jira/browse/HDFS-16102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lei w updated HDFS-16102:
-------------------------
    Description: 
The current logic in removeBlocksAssociatedTo(...) is as follows:
{code:java}
  void removeBlocksAssociatedTo(final DatanodeDescriptor node) {
    providedStorageMap.removeDatanode(node);
    for (DatanodeStorageInfo storage : node.getStorageInfos()) {
      final Iterator<BlockInfo> it = storage.getBlockIterator();
      //add the BlockInfos to a new collection as the
      //returned iterator is not modifiable.
      Collection<BlockInfo> toRemove = new ArrayList<>();
      while (it.hasNext()) {
        toRemove.add(it.next()); // First iteration: put blocks into another collection
      }

      for (BlockInfo b : toRemove) {
        removeStoredBlock(b, node); // Second iteration: remove blocks
      }
    }
  // ......
  }
{code}
 In fact, the removal can be done during the first iteration itself, so should we remove the redundant iteration to save time and memory?
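The idea can be sketched in plain Java. This is a hypothetical, simplified model (a {{List<String>}} standing in for the storage's block list, not the real {{BlockInfo}}/{{DatanodeStorageInfo}} types), and the single-pass variant assumes an iterator that supports removal; the comment in the original code says the real block iterator is not modifiable, so the actual fix would instead call removeStoredBlock(...) directly inside the first loop:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class SinglePassRemove {

    // Current two-pass shape: copy everything into a temporary
    // collection, then remove each element in a second loop.
    static int removeTwoPass(List<String> blocks) {
        List<String> toRemove = new ArrayList<>();
        Iterator<String> it = blocks.iterator();
        while (it.hasNext()) {
            toRemove.add(it.next()); // first pass: copy
        }
        int removed = 0;
        for (String b : toRemove) {
            blocks.remove(b);        // second pass: remove
            removed++;
        }
        return removed;
    }

    // Proposed single-pass shape: remove through the iterator,
    // so no temporary ArrayList is allocated at all.
    static int removeSinglePass(List<String> blocks) {
        int removed = 0;
        Iterator<String> it = blocks.iterator();
        while (it.hasNext()) {
            it.next();
            it.remove(); // remove while iterating; no extra collection
            removed++;
        }
        return removed;
    }

    public static void main(String[] args) {
        List<String> a = new ArrayList<>(List.of("b1", "b2", "b3"));
        List<String> b = new ArrayList<>(List.of("b1", "b2", "b3"));
        System.out.println(removeTwoPass(a) + " remaining=" + a.size());
        System.out.println(removeSinglePass(b) + " remaining=" + b.size());
    }
}
```

Both variants end with an empty list; the single-pass one skips the O(n) extra allocation, which is what the proposed change is after.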

  was:
The current logic in removeBlocksAssociatedTo(...) is as follows:
{code:java}
  void removeBlocksAssociatedTo(final DatanodeDescriptor node) {
    providedStorageMap.removeDatanode(node);
    for (DatanodeStorageInfo storage : node.getStorageInfos()) {
      final Iterator<BlockInfo> it = storage.getBlockIterator();
      //add the BlockInfos to a new collection as the
      //returned iterator is not modifiable.
      Collection<BlockInfo> toRemove = new ArrayList<>();
      while (it.hasNext()) {
        toRemove.add(it.next()); // First iteration: put blocks into another collection
      }

      for (BlockInfo b : toRemove) {
        removeStoredBlock(b, node); // Second iteration: remove blocks
      }
    }
  // ......
  }
{code}
 In fact, the removal can be done during the first iteration itself, so should we remove the redundant iteration to save time?


> Remove redundant iteration in BlockManager#removeBlocksAssociatedTo(...) to 
> save time 
> --------------------------------------------------------------------------------------
>
>                 Key: HDFS-16102
>                 URL: https://issues.apache.org/jira/browse/HDFS-16102
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: lei w
>            Assignee: lei w
>            Priority: Minor
>         Attachments: HDFS-16102.001.patch
>
>
> The current logic in removeBlocksAssociatedTo(...) is as follows:
> {code:java}
>   void removeBlocksAssociatedTo(final DatanodeDescriptor node) {
>     providedStorageMap.removeDatanode(node);
>     for (DatanodeStorageInfo storage : node.getStorageInfos()) {
>       final Iterator<BlockInfo> it = storage.getBlockIterator();
>       //add the BlockInfos to a new collection as the
>       //returned iterator is not modifiable.
>       Collection<BlockInfo> toRemove = new ArrayList<>();
>       while (it.hasNext()) {
>         toRemove.add(it.next()); // First iteration: put blocks into another collection
>       }
>       for (BlockInfo b : toRemove) {
>         removeStoredBlock(b, node); // Second iteration: remove blocks
>       }
>     }
>   // ......
>   }
> {code}
>  In fact, the removal can be done during the first iteration itself, so should
> we remove the redundant iteration to save time and memory?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
