[ https://issues.apache.org/jira/browse/HDDS-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836547#comment-16836547 ]
Anu Engineer commented on HDDS-1511:
------------------------------------

[~sdeka] I am not sure how to handle this issue, but I thought I would point it out. HDFS will be sharing these disks, so tracking free space based on open containers will not address the use case you are mentioning.

bq. Space tracking for open containers will enable adding a safety check during container create. If there is not sufficient free space in the volume, the container create operation can be failed.

Perhaps a better investment is to write code that handles container create or chunk write failures, or to come up with an algorithm that also accounts for the space HDFS is using.

> Space tracking for Open Containers in HDDS Volumes
> --------------------------------------------------
>
>                 Key: HDDS-1511
>                 URL: https://issues.apache.org/jira/browse/HDDS-1511
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>          Components: Ozone Datanode
>            Reporter: Supratim Deka
>            Assignee: Supratim Deka
>            Priority: Major
>         Attachments: HDDS-1511.000.patch
>
>
> For every HDDS volume, track the space committed to open containers. Introduce a counter, committedBytes, in HddsVolume; it counts the remaining space in open containers until they reach max capacity. The counter is incremented (by the container max capacity) on every container create and decremented (by the chunk size) on every chunk write.
> Space tracking for open containers will enable adding a safety check during container create: if there is not sufficient free space in the volume, the container create operation can be failed.
> The scope of this jira is only to add the space tracking for open containers. Checking for space and failing container create will be introduced in a subsequent jira.
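A minimal sketch of the committedBytes accounting described above. This is not the actual HddsVolume patch; the VolumeSpaceTracker class and its method names are hypothetical, and the container and chunk sizes in the usage example are only illustrative.

{code:java}
import java.util.concurrent.atomic.AtomicLong;

/**
 * Hypothetical per-volume tracker for space committed to open containers.
 * Sketches the committedBytes counter described in HDDS-1511; not the
 * actual HddsVolume implementation.
 */
public class VolumeSpaceTracker {

  // Remaining (reserved but not yet written) space in open containers.
  private final AtomicLong committedBytes = new AtomicLong(0);

  /** Container create: reserve the container's max capacity. */
  public void incrementCommittedBytes(long containerMaxSizeBytes) {
    committedBytes.addAndGet(containerMaxSizeBytes);
  }

  /** Chunk write: the written chunk consumes part of the reservation. */
  public void decrementCommittedBytes(long chunkSizeBytes) {
    committedBytes.addAndGet(-chunkSizeBytes);
  }

  /** Current space committed to open containers on this volume. */
  public long getCommittedBytes() {
    return committedBytes.get();
  }

  // Usage example with illustrative sizes (5 GB container, 4 MB chunk).
  public static void main(String[] args) {
    VolumeSpaceTracker tracker = new VolumeSpaceTracker();
    tracker.incrementCommittedBytes(5L * 1024 * 1024 * 1024); // container create
    tracker.decrementCommittedBytes(4L * 1024 * 1024);        // one chunk write
    System.out.println("committedBytes = " + tracker.getCommittedBytes());
  }
}
{code}

An AtomicLong (or an equivalent synchronized counter) keeps the accounting consistent when chunk writes to different open containers on the same volume happen concurrently.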