[ 
https://issues.apache.org/jira/browse/HADOOP-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12706649#action_12706649
 ] 

Raghu Angadi commented on HADOOP-5015:
--------------------------------------

> To make code review simpler, I have also retained the structure of the code 
> moved from FSNamesystem.java as it is in BlockManager.java.

There are a lot of formatting changes that break existing HDFS patches, 
e.g. this segment (ideally it should have been a clean cut-n-paste):

{noformat}
     synchronized (neededReplications) {
-      out.println("Metasave: Blocks waiting for replication: " +
-                  neededReplications.size());
+      out.println("Metasave: Blocks waiting for replication: "
+          + neededReplications.size());
       for (Block block : neededReplications) {
-        List<DatanodeDescriptor> containingNodes =
-                                          new ArrayList<DatanodeDescriptor>();
+        List<DatanodeDescriptor> containingNodes = new ArrayList<DatanodeDescriptor>();
         NumberReplicas numReplicas = new NumberReplicas();
         // source node returned is not used
         chooseSourceDatanode(block, containingNodes, numReplicas);
-        int usableReplicas = numReplicas.liveReplicas() +
-                             numReplicas.decommissionedReplicas();
+        int usableReplicas = numReplicas.liveReplicas()
+            + numReplicas.decommissionedReplicas();
         // l: == live:, d: == decommissioned c: == corrupt e: == excess
-        out.print(block + " (replicas:" +
-                  " l: " + numReplicas.liveReplicas() +
-                  " d: " + numReplicas.decommissionedReplicas() +
-                  " c: " + numReplicas.corruptReplicas() +
-                  " e: " + numReplicas.excessReplicas() +
-                  ((usableReplicas > 0)? "" : " MISSING") + ")");
+        out.print(block + " (replicas:" + " l: " + numReplicas.liveReplicas()
+            + " d: " + numReplicas.decommissionedReplicas() + " c: "
+            + numReplicas.corruptReplicas() + " e: "
+            + numReplicas.excessReplicas()
+            + ((usableReplicas > 0) ? "" : " MISSING") + ")");
 
-        for (Iterator<DatanodeDescriptor> jt = blocksMap.nodeIterator(block);
-             jt.hasNext();) {
+        for (Iterator<DatanodeDescriptor> jt = blocksMap.nodeIterator(block); jt
+            .hasNext();) {
           DatanodeDescriptor node = jt.next();
           out.print(" " + node + " : ");
         }
         out.println("");
{noformat}

Without such changes, it would have been much simpler to port the patches (just 
by changing the file name in the patch file).
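
As a rough sketch of what "just changing the file name" could look like, something 
like the following would retarget an existing patch's headers at the new file, had 
the code been a clean cut-n-paste. The paths below are illustrative only, not taken 
from any actual patch:

{noformat}
import sys

# Illustrative paths only; substitute the real ones from the patch being ported.
OLD = "src/hdfs/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java"
NEW = "src/hdfs/org/apache/hadoop/hdfs/server/namenode/BlockManager.java"

def retarget(patch_text):
    out = []
    for line in patch_text.splitlines(keepends=True):
        # Rewrite only the diff header lines ("Index:", "--- ", "+++ "),
        # leaving the hunk bodies untouched.
        if line.startswith(("Index: ", "--- ", "+++ ")):
            line = line.replace(OLD, NEW)
        out.append(line)
    return "".join(out)

if __name__ == "__main__":
    sys.stdout.write(retarget(sys.stdin.read()))
{noformat}

With the formatting changes in the current patch, the hunk bodies no longer match, 
so this kind of simple retargeting does not work and each patch has to be redone 
by hand.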


> Separate block/replica management code from FSNamesystem
> --------------------------------------------------------
>
>                 Key: HADOOP-5015
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5015
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: Hairong Kuang
>            Assignee: Suresh Srinivas
>             Fix For: 0.21.0
>
>         Attachments: blkmanager.patch, blkmanager.patch
>
>
> Currently FSNamesystem contains a large amount of code that manages blocks and 
> replicas. The code is scattered throughout FSNamesystem and is hard to read and 
> maintain. It would be nice to move the code to a separate class called, for 
> example, BlockManager. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.