Brahma Reddy Battula commented on HDFS-6489:

bq.Even with this, dfsUsed and numblocks counting is all messed up. e.g. 
FsDatasetImpl.removeOldBlock calls decDfsUsedAndNumBlocks twice (so even though 
dfsUsed is correctly decremented, numBlocks is not)
Nope, I believe you read this wrong.
{{FsDatasetImpl#removeOldReplica}} makes two separate calls, 
{{onBlockFileDeletion(..)}} and {{onMetaFileDeletion(...)}}, for the 
{{blockfile}} and the {{metafile}} respectively.
 *Code from {{FsDatasetImpl#removeOldReplica}}* 
{code:java}
    // Remove the old replicas
    if (replicaInfo.deleteBlockData() || !replicaInfo.blockDataExists()) {
      FsVolumeImpl volume = (FsVolumeImpl) replicaInfo.getVolume();
      volume.onBlockFileDeletion(bpid, replicaInfo.getBytesOnDisk());
      if (replicaInfo.deleteMetadata() || !replicaInfo.metadataExists()) {
        volume.onMetaFileDeletion(bpid, replicaInfo.getMetadataLength());
      }
    }
{code}

 *Code from {{FsVolumeImpl.java}}* 
{code:java}
  void onBlockFileDeletion(String bpid, long value) {
    decDfsUsedAndNumBlocks(bpid, value, true);
    if (isTransientStorage()) {
      dataset.releaseLockedMemory(value, true);
    }
  }

  void onMetaFileDeletion(String bpid, long value) {
    decDfsUsedAndNumBlocks(bpid, value, false);
  }

  private void decDfsUsedAndNumBlocks(String bpid, long value,
                                      boolean blockFileDeleted) {
    try (AutoCloseableLock lock = dataset.acquireDatasetLock()) {
      BlockPoolSlice bp = bpSlices.get(bpid);
      if (bp != null) {
        if (blockFileDeleted) {
          bp.decrNumBlocks();
        }
        bp.decDfsUsed(value);
      }
    }
  }
{code}

{{onBlockFileDeletion(..)}} calls {{decDfsUsedAndNumBlocks(bpid, value, 
true);}} with the {{blockFileDeleted}} flag set to {{true}} to decrement 
{{numBlocks}}, whereas {{onMetaFileDeletion(...)}} calls 
{{decDfsUsedAndNumBlocks(bpid, value, false);}} with the flag set to 
{{false}}, because there is no need to decrement {{numBlocks}} for the metafile.
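The intended accounting can be illustrated with a toy model (a simplified sketch, not the real HDFS classes): both deletions reduce {{dfsUsed}}, but only the block-file deletion reduces {{numBlocks}}.

```java
// Toy model of the accounting described above; the class and field names
// here are illustrative, not the actual FsVolumeImpl implementation.
public class VolumeAccounting {
  private long dfsUsed;
  private long numBlocks;

  VolumeAccounting(long dfsUsed, long numBlocks) {
    this.dfsUsed = dfsUsed;
    this.numBlocks = numBlocks;
  }

  // Mirrors onBlockFileDeletion -> decDfsUsedAndNumBlocks(..., true)
  void onBlockFileDeletion(long bytes) {
    decDfsUsedAndNumBlocks(bytes, true);
  }

  // Mirrors onMetaFileDeletion -> decDfsUsedAndNumBlocks(..., false)
  void onMetaFileDeletion(long bytes) {
    decDfsUsedAndNumBlocks(bytes, false);
  }

  private void decDfsUsedAndNumBlocks(long value, boolean blockFileDeleted) {
    dfsUsed -= value;        // space is reclaimed for both files
    if (blockFileDeleted) {
      numBlocks--;           // but only the block file counts as a block
    }
  }

  public static void main(String[] args) {
    VolumeAccounting v = new VolumeAccounting(1000, 5);
    v.onBlockFileDeletion(100);  // delete a 100-byte block file
    v.onMetaFileDeletion(10);    // delete its 10-byte meta file
    System.out.println(v.dfsUsed + " " + v.numBlocks); // 890 4
  }
}
```

So deleting one replica (block file plus meta file) reduces {{dfsUsed}} by both lengths but {{numBlocks}} by exactly one.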

bq.Also, what do you think about a robust unit-test framework to find out all 
these issues?

The only way is to enumerate all the write/delete cases and write a test for each of them.
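One possible shape for such a case list (the operation names and expected deltas below are illustrative assumptions, not an existing Hadoop test):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of an exhaustive case table for dfsUsed/numBlocks accounting.
// Each entry maps an operation to its expected {dfsUsed, numBlocks} delta;
// the operations and sizes are illustrative, not real Hadoop test cases.
public class AccountingCaseTable {
  public static void main(String[] args) {
    Map<String, long[]> cases = new LinkedHashMap<>();
    cases.put("createRbw(100B block + 10B meta)", new long[]{110, 1});
    cases.put("append(+10B data, meta unchanged)", new long[]{10, 0});
    cases.put("deleteBlockFile(110B)", new long[]{-110, -1});
    cases.put("deleteMetaFile(10B)", new long[]{-10, 0});

    long dfsUsed = 0, numBlocks = 0;
    for (long[] delta : cases.values()) {
      dfsUsed += delta[0];
      numBlocks += delta[1];
    }
    // After create, append, then a full delete, both counters must
    // return to zero; any residue indicates an accounting bug.
    System.out.println(dfsUsed + " " + numBlocks);
  }
}
```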

 *Comments for this Jira* 

1) Only {{incDfsUsed()}} should be used here, since {{numBlocks}} is already 
updated in {{createRbw()}} for new blocks; for {{append}}, incrementing 
{{numBlocks}} is not required.
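A toy sketch of why appending must not touch the block count (simplified model, not the real {{FsDatasetImpl}}): {{numBlocks}} is bumped once at {{createRbw()}}, so an append that bumps it again double-counts the block.

```java
// Illustration of comment 1: a replica is counted as a block exactly once,
// at createRbw(); append only changes the space used. Names are illustrative.
public class AppendCounting {
  long dfsUsed, numBlocks;

  void createRbw(long bytes) {    // new replica: count the block once
    dfsUsed += bytes;
    numBlocks++;
  }

  void appendWrong(long bytes) {  // buggy: counts the same block again
    dfsUsed += bytes;
    numBlocks++;
  }

  void appendRight(long bytes) {  // correct: only dfsUsed changes
    dfsUsed += bytes;
  }

  public static void main(String[] args) {
    AppendCounting a = new AppendCounting();
    a.createRbw(100);
    a.appendRight(10);
    System.out.println(a.numBlocks);  // 1: still one block

    AppendCounting b = new AppendCounting();
    b.createRbw(100);
    b.appendWrong(10);
    System.out.println(b.numBlocks);  // 2: one block counted twice
  }
}
```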
2) The previous metadata length should also be deducted.
{code:java}
        if (b instanceof ReplicaInfo) {
          ReplicaInfo replicaInfo = ((ReplicaInfo) b);
          if (replicaInfo.getState() == ReplicaState.RBW) {
            ReplicaInPipeline rip = (ReplicaInPipeline) replicaInfo;
            // rip.getOriginalBytesReserved() - rip.getBytesReserved()
            // is the amount of data that was written to the replica
            long bytesAdded = rip.getOriginalBytesReserved() -
                rip.getBytesReserved() + replicaInfo.getMetaFile().length();
            incDfsUsedAndNumBlocks(bpid, bytesAdded);
          }
        }
{code}
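As a worked example of the {{bytesAdded}} computation above (the numbers are illustrative): since the reservation shrinks as data is written, the difference of the two reserved values is exactly the data actually written, not the whole block length.

```java
// Worked example of bytesAdded = originalBytesReserved - bytesReserved
//                                + metaFile.length().
// All values are illustrative, not taken from a real datanode.
public class BytesAdded {
  public static void main(String[] args) {
    long originalBytesReserved = 60L * 1024 * 1024;  // reserved when append started
    long bytesWritten = 10;                          // a small append
    long bytesReserved = originalBytesReserved - bytesWritten; // still reserved
    long metaFileLength = 7;                         // checksum file size (illustrative)

    long bytesAdded = originalBytesReserved - bytesReserved + metaFileLength;
    System.out.println(bytesAdded);  // 17 bytes, not the 60M block length
  }
}
```

This is exactly the scenario from the issue description: a 10-byte append should add ~10 bytes (plus metadata) to {{dfsUsed}}, not 60M.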

Sorry for late reply.

> DFS Used space is not correct computed on frequent append operations
> --------------------------------------------------------------------
>                 Key: HDFS-6489
>                 URL: https://issues.apache.org/jira/browse/HDFS-6489
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.2.0, 2.7.1, 2.7.2
>            Reporter: stanley shi
>            Assignee: Weiwei Yang
>         Attachments: HDFS-6489.001.patch, HDFS-6489.002.patch, 
> HDFS-6489.003.patch, HDFS-6489.004.patch, HDFS-6489.005.patch, 
> HDFS-6489.006.patch, HDFS-6489.007.patch, HDFS6489.java
> The current implementation of the Datanode will increase the DFS used space 
> on each block write operation. This is correct in most scenarios (creating 
> a new file), but sometimes it behaves incorrectly (appending small data to 
> a large block).
> For example, I have a file with only one block (say, 60M). Then I try to 
> append to it very frequently, but each time I append only 10 bytes.
> Then on each append, DFS used will be increased by the length of the 
> block (60M), not the actual data length (10 bytes).
> Consider a scenario where I use many clients to append concurrently to a 
> large number of files (1000+). Assume the block size is 32M (half of the 
> default value); then DFS used will be increased by 1000*32M = 32G on each 
> append to the files, but actually I only wrote 10K bytes. This will cause 
> the datanode to report insufficient disk space on data write.
> {quote}2014-06-04 15:27:34,719 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock  
> BP-1649188734- received 
> exception org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: 
> Insufficient space for appending to FinalizedReplica, blk_1073742834_45306, 
> FINALIZED{quote}
> But the actual disk usage:
> {quote}
> [root@hdsh143 ~]# df -h
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/sda3              16G  2.9G   13G  20% /
> tmpfs                 1.9G   72K  1.9G   1% /dev/shm
> /dev/sda1              97M   32M   61M  35% /boot
> {quote}

This message was sent by Atlassian JIRA
