sodonnel commented on code in PR #3882:
URL: https://github.com/apache/ozone/pull/3882#discussion_r1004371929


##########
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/WritableECContainerProvider.java:
##########
@@ -93,16 +93,13 @@ public WritableECContainerProvider(ConfigurationSource conf,
   public ContainerInfo getContainer(final long size,
       ECReplicationConfig repConfig, String owner, ExcludeList excludeList)
       throws IOException, TimeoutException {
-    // Bound this at a minimum of 1 byte in case a request is made for a very
-    // small size, which when divided by EC DataNum is zero.
-    long requiredSpace = Math.max(1, size / repConfig.getData());
     synchronized (this) {
       int openPipelineCount = pipelineManager.getPipelineCount(repConfig,
           Pipeline.PipelineState.OPEN);
       if (openPipelineCount < providerConfig.getMinimumPipelines()) {
         try {
           return allocateContainer(
-              repConfig, requiredSpace, owner, excludeList);
+              repConfig, size, owner, excludeList);

Review Comment:
   The cluster block size is 256MB, but when we ask for an EC block we are
really asking for 256MB * dataNum in total across the group.
   
   Right now we pass only 256MB down to SCM without adjusting it for
dataNum, and then SCM divides it by dataNum again.
   
   Looking at this again, I think you are correct in what you said before:
we should not divide in SCM, as the value being passed in is already the
"single replica block size".


