[ 
https://issues.apache.org/jira/browse/HDDS-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836659#comment-16836659
 ] 

Arpit Agarwal commented on HDDS-1511:
-------------------------------------

bq. Perhaps a better investment is to write code that handles the container 
creation or chunk write failure, or come up with an algorithm to look at what 
HDFS is doing too.
[~anu], you are absolutely right. We should and we will make sure we robustly 
handle IO failures.

This particular Jira fixes a very limited scenario of putting new containers on 
obviously full volumes. It is not a substitute for robust failure handling.

> Space tracking for Open Containers in HDDS Volumes
> --------------------------------------------------
>
>                 Key: HDDS-1511
>                 URL: https://issues.apache.org/jira/browse/HDDS-1511
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>          Components: Ozone Datanode
>            Reporter: Supratim Deka
>            Assignee: Supratim Deka
>            Priority: Major
>         Attachments: HDDS-1511.000.patch
>
>
> For every HDDS Volume, track the space usage in open containers. Introduce a 
> counter committedBytes in HddsVolume - this counts the remaining space in 
> Open containers until they reach max capacity. The counter is incremented (by 
> container max capacity) on every container create, and decremented (by chunk 
> size) on every chunk write.
> Space tracking for open containers will enable adding a safety check during 
> container create.
> If the volume does not have sufficient free space, the container create 
> operation can be rejected.
> The scope of this jira is to just add the space tracking for Open Containers. 
> Checking for space and failing container create will be introduced in a 
> subsequent jira.
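The counter scheme described above can be sketched roughly as follows. This is an illustrative sketch only: the class and method names are assumptions for this example and may not match what the actual HDDS-1511 patch introduces.

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Sketch of per-volume space tracking for open containers.
 * Names are illustrative assumptions, not the actual patch API.
 */
public class HddsVolumeSpaceTracker {

    // Remaining space committed to open containers on this volume.
    private final AtomicLong committedBytes = new AtomicLong(0);

    /** On container create: reserve the container's max capacity. */
    public long incCommittedBytes(long containerMaxCapacity) {
        return committedBytes.addAndGet(containerMaxCapacity);
    }

    /** On chunk write: the written chunk consumes reserved space. */
    public long decCommittedBytes(long chunkSize) {
        return committedBytes.addAndGet(-chunkSize);
    }

    public long getCommittedBytes() {
        return committedBytes.get();
    }
}
```

A later safety check (out of scope for this jira) could then compare a volume's available space against committedBytes before allowing a new container create.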



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
