[ https://issues.apache.org/jira/browse/HADOOP-3938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12631715#action_12631715 ]

Steve Loughran commented on HADOOP-3938:
----------------------------------------

Konstantin said
> A unit test would be hard to write for the case since there are no valid ways 
> to reproduce the condition.

You could do a functional test with a VMware/Xen image, though it would take a lot 
of work. The alternative tactic is to have a mock implementation of the code that 
determines disk space use, and simulate failures when a node comes up.
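
A rough sketch of that second approach (all names below are made up purely for 
illustration; Hadoop has no DiskSpaceProbe interface): hide the free-space check 
behind a small interface, and in tests substitute a fake that reports whatever 
figure the test wants, so the low-disk condition can be reproduced without a real 
full disk.

    import java.io.File;
    import java.io.IOException;

    /** Hypothetical abstraction over "how much space is free on this volume?". */
    interface DiskSpaceProbe {
        long getAvailableBytes(File volume) throws IOException;
    }

    /** Production flavour: ask the local filesystem directly. */
    class LocalDiskSpaceProbe implements DiskSpaceProbe {
        public long getAvailableBytes(File volume) {
            return volume.getUsableSpace();
        }
    }

    /** Test flavour: report a scripted figure so a unit test can simulate a
     *  nearly-full or failing disk while a node is coming up. */
    class FakeDiskSpaceProbe implements DiskSpaceProbe {
        private final long availableBytes;

        FakeDiskSpaceProbe(long availableBytes) {
            this.availableBytes = availableBytes;
        }

        public long getAvailableBytes(File volume) {
            return availableBytes; // no real disk involved
        }
    }

    // In a test you would wire the fake in, e.g. new FakeDiskSpaceProbe(0L) to
    // pretend the disk is full, and assert that the node handles the condition.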

> Quotas for disk space management
> --------------------------------
>
>                 Key: HADOOP-3938
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3938
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Robert Chansler
>            Assignee: Raghu Angadi
>             Fix For: 0.19.0
>
>         Attachments: HADOOP-3938.patch, HADOOP-3938.patch, HADOOP-3938.patch, 
> HADOOP-3938.patch, HADOOP-3938.patch, HADOOP-3938.patch, 
> hdfs_quota_admin_guide.pdf, hdfs_quota_admin_guide.xml
>
>
> Directory quotas for bytes limit the number of bytes used by files in and 
> below the directory. Operation is independent of name quotas (HADOOP-3187), 
> but the implementation is parallel. Each file is charged according to its 
> length multiplied by its intended replication factor.
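
(For illustration of that rule: a 1 GB file written with an intended replication 
factor of 3 is charged as 3 GB against the directory's space quota.)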

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
