[ https://issues.apache.org/jira/browse/HADOOP-3938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12631502#action_12631502 ]

Konstantin Shvachko commented on HADOOP-3938:
---------------------------------------------

On second thought, my proposal is not good, since it is exactly the way to
produce such an incorrect state.
Instead, we should just remove the restriction and let the server start with a
warning.
A unit test would be hard to write for this case, since there is no valid way
to reproduce the condition.
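
For illustration only, a minimal Java sketch of what "start with a warning"
could look like: an over-quota directory found while loading state is logged
rather than treated as fatal. The class, method, and parameter names here are
hypothetical, not the actual HDFS code.

    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;

    public class QuotaStartupCheck {
      private static final Log LOG = LogFactory.getLog(QuotaStartupCheck.class);

      // Hypothetical check: instead of refusing to start when cached usage
      // exceeds the configured quota, only emit a warning and continue.
      static void checkQuotaOnLoad(String path, long diskspaceUsed, long diskspaceQuota) {
        if (diskspaceQuota >= 0 && diskspaceUsed > diskspaceQuota) {
          LOG.warn("Diskspace quota violation for " + path + ": quota = "
              + diskspaceQuota + " bytes, usage = " + diskspaceUsed
              + " bytes. Starting up anyway.");
        }
      }
    }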

> Quotas for disk space management
> --------------------------------
>
>                 Key: HADOOP-3938
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3938
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Robert Chansler
>            Assignee: Raghu Angadi
>             Fix For: 0.19.0
>
>         Attachments: HADOOP-3938.patch, HADOOP-3938.patch, HADOOP-3938.patch, 
> HADOOP-3938.patch, HADOOP-3938.patch, HADOOP-3938.patch, 
> hdfs_quota_admin_guide.pdf, hdfs_quota_admin_guide.xml
>
>
> Directory quotas for bytes limit the number of bytes used by files in and 
> below the directory. Operation is independent of name quotas (HADOOP-3187), 
> but the implementation is parallel. Each file is charged according to its 
> length multiplied by its intended replication factor.
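
As a worked illustration of the charging rule described above (a sketch, not
the HDFS implementation; the class and method names are made up):

    public class DiskspaceCharge {
      // Charge = file length * intended replication factor, independent of
      // how many block replicas actually exist at the moment.
      static long charge(long fileLengthBytes, short replication) {
        return fileLengthBytes * replication;
      }

      public static void main(String[] args) {
        // Example: a 128 MB file with replication 3 is charged 384 MB.
        System.out.println(charge(128L * 1024 * 1024, (short) 3));
      }
    }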

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
