sodonnel commented on code in PR #3882:
URL: https://github.com/apache/ozone/pull/3882#discussion_r1005487145
##########
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/WritableECContainerProvider.java:
##########
@@ -93,16 +93,13 @@ public WritableECContainerProvider(ConfigurationSource conf,
public ContainerInfo getContainer(final long size,
ECReplicationConfig repConfig, String owner, ExcludeList excludeList)
throws IOException, TimeoutException {
- // Bound this at a minimum of 1 byte in case a request is made for a very
- // small size, which when divided by EC DataNum is zero.
- long requiredSpace = Math.max(1, size / repConfig.getData());
synchronized (this) {
int openPipelineCount = pipelineManager.getPipelineCount(repConfig,
Pipeline.PipelineState.OPEN);
if (openPipelineCount < providerConfig.getMinimumPipelines()) {
try {
return allocateContainer(
- repConfig, requiredSpace, owner, excludeList);
+ repConfig, size, owner, excludeList);
Review Comment:
I think our two messages crossed over.
The issue we have is that the container size limit is 5GB for both Ratis
and EC, but for EC each container in the group has its own 5GB capacity. When
we search for a container with free space, we still check against just the
single 5GB limit.
Right now the OM code just passes the cluster block size down to SCM to
request a container, and we then divide that by dataNum. As you pointed out
earlier, that is not correct.
The EC logical block is 256MB * dataNum, but since we don't scale up the EC
container size limit, we should just use blockSize without multiplying or
dividing it.
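To make the arithmetic concrete, here is a minimal standalone sketch of the
calculation I mean (the class and variable names are illustrative only, not
the actual provider code):

```java
// Hypothetical sketch of the per-container space estimate discussed above.
public class EcSpaceEstimate {
  public static void main(String[] args) {
    final long blockSize = 256L * 1024 * 1024; // cluster block size the OM passes to SCM
    final int dataNum = 3;                     // e.g. EC rs-3-2

    // The EC logical block is blockSize * dataNum in total, but the container
    // size limit is not scaled up, so the free space we need to find in any
    // single container of the group is just blockSize itself.
    long requiredPerContainer = blockSize;

    // Dividing by dataNum (the previous behaviour) under-estimates it.
    long oldEstimate = Math.max(1, blockSize / dataNum);

    System.out.println("required per container  = " + requiredPerContainer);
    System.out.println("old (incorrect) estimate = " + oldEstimate);
  }
}
```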
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]