[ https://issues.apache.org/jira/browse/HADOOP-3938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12629335#action_12629335 ]

shv edited comment on HADOOP-3938 at 9/8/08 5:23 PM:
---------------------------------------------------------------------

# It is better to implement
{code}
public ContentSummary(long length, long fileCount, long directoryCount) {
 this(length, fileCount, directoryCount, length, -1);
}
{code}
# In DFSClient's {{locateFollowingBlock()}} you do not need to throw the unwrapped exception in order to check whether it is a {{RemoteException}}. Also, the current code does not throw if the remote exception is none of the 4 types considered. Something like this should work:
{code}
try {
  return namenode.addBlock(src, clientName);
} catch (RemoteException re) {
  IOException ue = re.unwrapRemoteException(FileNotFoundException.class,
                                            AccessControlException.class,
                                            QuotaExceededException.class);
  if (re != ue)
    throw ue;  // one of the three known exceptions: do not retry
  ue = re.unwrapRemoteException(NotReplicatedYetException.class);
  if (re == ue || --retries == 0)  // not a NotReplicatedYetException,
    throw re;                      // or out of retries
  LOG.warn("NotReplicatedYetException sleeping " + src +
           " retries left " + retries);
  try {
    Thread.sleep(sleeptime);
    sleeptime *= 2;  // exponential backoff before the next attempt
  } catch (InterruptedException ie) {
    // ignored; the surrounding loop simply retries sooner
  }
}
{code}
# JavaDoc for {{QuotaExceededException}} should reflect the new space-quota semantics.
# In {{QuotaExceededException.getMessage()}}, {{" file count="}} may read better than {{" namespace count="}}.
# FSDirectory does not need to {{import StringUtils}}.
# {{FSDirectory.unprotectedAddFile()}} should probably not add blocks first and then remove them if the space quota is violated. The blocks can be added once the node is successfully created; you may need to pass the size of the file to {{addNode()}}.
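The reordering above can be sketched roughly as follows. This is only an illustration, not the actual FSDirectory API: {{diskspaceConsumed()}} and {{fitsQuota()}} are hypothetical helper names, and each byte is charged once per intended replica as the issue description says. The point is that the quota is verified before the tree is mutated, so no rollback of blocks is ever needed.
{code}
// Hypothetical sketch, not the real FSDirectory code: verify the space
// quota before any part of the tree is mutated, so no rollback is needed.
public class QuotaSketch {

  /** Bytes charged for a file: length times its intended replication. */
  static long diskspaceConsumed(long length, short replication) {
    return length * replication;
  }

  /** A negative quota means "no limit" in this sketch. */
  static boolean fitsQuota(long used, long consumed, long dsQuota) {
    return dsQuota < 0 || used + consumed <= dsQuota;
  }

  public static void main(String[] args) {
    // 128 MB file with 3 intended replicas is charged 384 MB
    long consumed = diskspaceConsumed(128L << 20, (short) 3);
    if (consumed != 384L << 20)
      throw new AssertionError("unexpected diskspace charge");
    // 200 MB already used under a 512 MB quota cannot take 384 MB more
    if (fitsQuota(200L << 20, consumed, 512L << 20))
      throw new AssertionError("quota check should have failed");
  }
}
{code}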
# {{numItemsInTree()}} followed by {{diskspaceInTree()}} is called on several occasions: {{INodeDirectoryWithQuota()}}, {{addChild()}}, {{removeChild()}}. This is very inefficient; the tree traversal should be done only once. You can use something like the {{TwoCounters}} class but with a more meaningful name, say {{DirectoryQuota}}. I think {{numItemsInTree()}} and {{diskspaceInTree()}} should be merged into one method, preferably non-recursive.
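As an illustration only, with {{Node}} standing in for the real INode classes, one iterative walk can fill both counters of such a {{DirectoryQuota}} in a single pass:
{code}
// Illustrative sketch: one non-recursive traversal gathers both the
// namespace count and the disk space, instead of walking the subtree
// once per counter. Node is a stand-in for the real INode classes.
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

class DirectoryQuota {            // the "TwoCounters" replacement suggested above
  long namespaceCount;            // files + directories in the subtree
  long diskspace;                 // bytes charged to the subtree
}

class Node {
  final long diskspace;           // 0 for directories in this sketch
  final List<Node> children = new ArrayList<>();

  Node(long diskspace) { this.diskspace = diskspace; }

  /** One non-recursive walk computing both counters together. */
  DirectoryQuota summarize() {
    DirectoryQuota q = new DirectoryQuota();
    Deque<Node> stack = new ArrayDeque<>();
    stack.push(this);
    while (!stack.isEmpty()) {
      Node n = stack.pop();
      q.namespaceCount++;
      q.diskspace += n.diskspace;
      for (Node c : n.children)
        stack.push(c);
    }
    return q;
  }
}
{code}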
# {{unprotectedSetQuota()}} should take two longs rather than a long and a boolean
{code}
private void unprotectedSetQuota(String src, long nsQuota, long dsQuota)
{code}
as in other places, e.g. {{addToParent()}}.
# I am not sure we need to add an extra pair of set/clear-DiskspaceQuota() methods; maybe we just need an extra parameter in the old {{setQuota(src, nsQuota, dsQuota)}}.
# Should we check space quota when we start an append?
# NameNode redundantly imports BlockCommand (not introduced by this patch).


> Quotas for disk space management
> --------------------------------
>
>                 Key: HADOOP-3938
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3938
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Robert Chansler
>            Assignee: Raghu Angadi
>             Fix For: 0.19.0
>
>         Attachments: HADOOP-3938.patch, HADOOP-3938.patch, HADOOP-3938.patch, 
> HADOOP-3938.patch, hdfs_quota_admin_guide.pdf, hdfs_quota_admin_guide.xml
>
>
> Directory quotas for bytes limit the number of bytes used by files in and 
> below the directory. Operation is independent of name quotas (HADOOP-3187), 
> but the implementation is parallel. Each file is charged according to its 
> length multiplied by its intended replication factor.

-- 
This message is automatically generated by JIRA.