[ 
https://issues.apache.org/jira/browse/HDFS-9260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15102574#comment-15102574
 ] 

Colin Patrick McCabe commented on HDFS-9260:
--------------------------------------------

Thanks for working on this, [~sfriberg].  It looks promising.

{code}
 >   @Override
 >   boolean removeStorage(DatanodeStorageInfo storage) {
 >     int dnIndex = findStorageInfoFromEnd(storage);
 >     if (dnIndex < 0) { // the node is not found
 >       return false;
 >     }
 >     // set the triplet to null
 >     setStorageInfo(dnIndex, null);
 >     indices[dnIndex] = -1;
 >     return true;
 >   }
{code}

This still refers to "the triplet" but there are no more triplets, right?  
There are some other comments referencing "triplets" in {{BlockInfoStriped}} 
that should be fixed as well.

What is the strategy for shrinking {{BlockInfo#storages}}?  It seems like right 
now {{setStorageInfo(<index>, null)}} will create a "hole" in the array, but it 
is never actually shrunk.
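For what it's worth, since the patch keeps the storages sorted, removing an entry without leaving a hole would presumably mean shifting the tail down and shrinking the array. A minimal standalone sketch of that idea (names are illustrative; {{BlockInfo}} also keeps the parallel {{indices}} array, which would need the same treatment):

```java
import java.util.Arrays;

public class StorageArrayShrinkDemo {
    /**
     * Remove the element at idx without leaving a hole, preserving order:
     * copy the prefix, then shift the suffix down by one slot.
     * Illustrative only -- not a claim about what the patch actually does.
     */
    static String[] removeAt(String[] storages, int idx) {
        String[] shrunk = new String[storages.length - 1];
        System.arraycopy(storages, 0, shrunk, 0, idx);
        System.arraycopy(storages, idx + 1, shrunk, idx, shrunk.length - idx);
        return shrunk;
    }

    public static void main(String[] args) {
        String[] s = {"s1", "s2", "s3"};
        System.out.println(Arrays.toString(removeAt(s, 1))); // [s1, s3]
        System.out.println(Arrays.toString(removeAt(s, 0))); // [s2, s3]
    }
}
```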

{code}
 >      // Remove here for now as removeStoredBlock will do it otherwise
 >      // and cause concurrent modification exception
{code}

This comment could be clearer.  How about "we must remove the block via the 
iterator"?

{code}
 >       if (shouldPostponeBlocksFromFuture) {
 >         // If the block is an out-of-date generation stamp or state,
 >         // but we're the standby, we shouldn't treat it as corrupt,
 >         // but instead just queue it for later processing.
 >         // TODO: Pretty confident this should be s/storedBlock/block below,
 >         // since we should be postponing the info of the reported block, not
 >         // the stored block. See HDFS-6289 for more context.
 >         queueReportedBlock(storageInfo, storedBlock, reportedState,
 >             QUEUE_REASON_CORRUPT_STATE);
 >       } else {
{code}

If we're really confident that this should be "block" rather than 
"storedBlock", let's fix it.

{code}
@@ -122,8 +91,9 @@ BlockInfo addBlockCollection(BlockInfo b, BlockCollection bc) {
    */
   void removeBlock(Block block) {
     BlockInfo blockInfo = blocks.remove(block);
-    if (blockInfo == null)
+    if (blockInfo == null) {
       return;
+    }
...
@@ -191,8 +177,9 @@ int numNodes(Block b) {
    */
   boolean removeNode(Block b, DatanodeDescriptor node) {
     BlockInfo info = blocks.get(b);
-    if (info == null)
+    if (info == null) {
       return false;
+    }
{code}
Let's try to avoid "no-op" changes like this in this patch, since it's already 
pretty big.  We can fix whitespace and so forth in other JIRAs to avoid 
creating confusion about what was changed here.

{code}
      return set != null ? set.get(blockId, LONG_AND_BLOCK_COMPARATOR) : null;
{code}
This might be simpler as:
{code}
if (set == null) {
  return null;
}
return set.get(blockId, LONG_AND_BLOCK_COMPARATOR);
{code}

{code}
   /**
    * Add a replica's meta information into the map
    *
    * @param bpid block pool id
    * @param replicaInfo a replica's meta information
-   * @return previous meta information of the replica
+   * @return true if inserted into the set
    * @throws IllegalArgumentException if the input parameter is null
    */
-  ReplicaInfo add(String bpid, ReplicaInfo replicaInfo) {
+  boolean add(String bpid, ReplicaInfo replicaInfo) {
{code}
I would like to see some clear comments in this function on what happens if 
there is already a copy of the replicaInfo in the ReplicaMap.  I might be 
wrong, but based on my reading of TreeSet.java, it seems like the new entry 
won't be added, which is a behavior change from what we did earlier.  Unless 
I'm missing something, this doesn't seem quite right since the new ReplicaInfo 
might have a different genstamp, etc.
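The {{TreeSet.add}} behavior I'm describing is easy to check with a minimal standalone sketch; here a {{String}} with a case-insensitive comparator stands in for a {{ReplicaInfo}} whose comparator ignores the genstamp (both stand-ins are illustrative, not from the patch):

```java
import java.util.TreeSet;

public class TreeSetAddDemo {
    public static void main(String[] args) {
        // Two elements that differ but compare as equal under the comparator,
        // like two ReplicaInfos for the same block with different genstamps.
        TreeSet<String> set = new TreeSet<>(String.CASE_INSENSITIVE_ORDER);
        set.add("Replica");

        // add() returns false and, crucially, keeps the OLD element --
        // the "new" information is silently dropped.
        boolean added = set.add("REPLICA");
        System.out.println(added);       // false
        System.out.println(set.first()); // Replica
    }
}
```

So if the intent is "replace on duplicate" (the old {{ReplicaInfo add}} returned the previous entry), a plain {{TreeSet#add}} won't give you that; you'd need an explicit remove-then-add or equivalent.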

> Improve performance and GC friendliness of startup and FBRs
> -----------------------------------------------------------
>
>                 Key: HDFS-9260
>                 URL: https://issues.apache.org/jira/browse/HDFS-9260
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode, namenode, performance
>    Affects Versions: 2.7.1
>            Reporter: Staffan Friberg
>            Assignee: Staffan Friberg
>         Attachments: FBR processing.png, HDFS Block and Replica Management 
> 20151013.pdf, HDFS-7435.001.patch, HDFS-7435.002.patch, HDFS-7435.003.patch, 
> HDFS-7435.004.patch, HDFS-7435.005.patch, HDFS-7435.006.patch, 
> HDFS-7435.007.patch, HDFS-9260.008.patch, HDFS-9260.009.patch, 
> HDFS-9260.010.patch, HDFSBenchmarks.zip, HDFSBenchmarks2.zip
>
>
> This patch changes the data structures used for BlockInfos and Replicas to 
> keep them sorted. This allows faster and more GC-friendly handling of full 
> block reports.
> Would like to hear people's feedback on this change.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
