adoroszlai commented on code in PR #8080:
URL: https://github.com/apache/ozone/pull/8080#discussion_r2000251003


##########
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java:
##########
@@ -724,6 +724,11 @@ public static boolean shouldNotFailoverOnRpcException(Throwable exception) {
     return exception instanceof InvalidProtocolBufferException;
   }
 
+  public static long requiredReplicationSpace(long defaultContainerSize) {
+    // During container import, double the container size is required to hold the container in the tmp and destination directories
+    return 2 * defaultContainerSize;
+  }
+

Review Comment:
   Code in `hdds-common` is shared between client and server. `requiredReplicationSpace` is only needed by servers. Please move it to `HddsServerUtil`.
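   
   A minimal sketch of the suggested relocation (illustrative only; the body is unchanged from the PR, only the hosting class differs):
   
   ```java
   // Hypothetical placement in HddsServerUtil, which is server-side only:
   public static long requiredReplicationSpace(long defaultContainerSize) {
     // Container import keeps two copies of the container, one in the
     // tmp directory and one in the destination directory, hence 2x.
     return 2 * defaultContainerSize;
   }
   ```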



##########
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/replication/ReplicationManagerUtil.java:
##########
@@ -78,7 +79,7 @@ public static List<DatanodeDetails> getTargetDatanodes(PlacementPolicy policy,
     // Ensure that target datanodes have enough space to hold a complete
     // container.
     final long dataSizeRequired =
-        Math.max(container.getUsedBytes(), defaultContainerSize);
+        Math.max(container.getUsedBytes(), HddsUtils.requiredReplicationSpace(defaultContainerSize));

Review Comment:
   Shouldn't 2x apply to the result of (the original) `max`?
   
   Example: if the container is 5GB and the default size is 1GB, then we need 10GB on the datanode, not 5GB.
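   
   A sketch of the suggested fix (assuming the helper simply doubles its argument, as in the PR):
   
   ```java
   // Double the larger of the two values, not just the default size:
   final long dataSizeRequired =
       2 * Math.max(container.getUsedBytes(), defaultContainerSize);
   // usedBytes = 5GB, defaultContainerSize = 1GB -> 10GB required here,
   // whereas doubling only the default yields max(5GB, 2GB) = 5GB.
   ```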



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
