[ https://issues.apache.org/jira/browse/HDFS-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16594766#comment-16594766 ]

Fei Hui commented on HDFS-13863:
--------------------------------

[~linyiqun] Thanks for your comments.
There are two cases that throw DiskOutOfSpaceException:
* No more available volumes
* Out of space: "The volume with the most available space (=" + maxAvailable + " B) is less than the block size (=" + blockSize + " B)."
Maybe logging the exception message could help us understand why we fall back to persistent storage?
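A minimal standalone sketch of the proposed fix: log the exception before falling back to persistent storage instead of silently swallowing it. The classes and method below are simplified stand-ins for illustration, not the real FsDatasetImpl types; the actual patch would use the DataNode's existing logger.

{code:java}
import java.util.logging.Logger;

public class CreateRbwSketch {
    static final Logger LOG = Logger.getLogger("FsDatasetImpl");

    // Stub standing in for o.a.h.util.DiskChecker.DiskOutOfSpaceException.
    static class DiskOutOfSpaceException extends Exception {
        DiskOutOfSpaceException(String msg) { super(msg); }
    }

    // Simulates volumes.getNextTransientVolume(): no transient volume fits.
    static String getNextTransientVolume(long numBytes)
            throws DiskOutOfSpaceException {
        throw new DiskOutOfSpaceException(
            "Out of space: the volume with the most available space is "
            + "less than the block size (=" + numBytes + " B).");
    }

    static String placeBlock(long numBytes) {
        String ref = null;
        try {
            // First try to place the block on a transient volume.
            ref = getNextTransientVolume(numBytes);
        } catch (DiskOutOfSpaceException de) {
            // Proposed change: log the reason instead of ignoring it,
            // then fall back to persistent storage.
            LOG.warning("Falling back to persistent storage: "
                + de.getMessage());
            ref = "persistent-volume";
        }
        return ref;
    }

    public static void main(String[] args) {
        System.out.println(placeBlock(134217728L));
    }
}
{code}

With a change like this, an operator who sees no data landing on the RAM disk gets the reason straight from the DataNode log.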

> FsDatasetImpl should log DiskOutOfSpaceException
> ------------------------------------------------
>
>                 Key: HDFS-13863
>                 URL: https://issues.apache.org/jira/browse/HDFS-13863
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 3.1.0, 2.9.1, 3.0.3
>            Reporter: Fei Hui
>            Assignee: Fei Hui
>            Priority: Major
>         Attachments: HDFS-13863.001.patch, HDFS-13863.002.patch
>
>
> The code in function *createRbw* as follow
> {code:java}
>         try {
>           // First try to place the block on a transient volume.
>           ref = volumes.getNextTransientVolume(b.getNumBytes());
>           datanode.getMetrics().incrRamDiskBlocksWrite();
>         } catch (DiskOutOfSpaceException de) {
>           // Ignore the exception since we just fall back to persistent 
> storage.
>         } finally {
>           if (ref == null) {
>             cacheManager.release(b.getNumBytes());
>           }
>         }
> {code}
> I think we should log the exception, because it took me a long time to 
> resolve the problem, and others may run into the same issue.
> When I tested ram_disk, I found that no data was written to the RAM disk. I 
> debugged, dug into the source code, and found that the RAM disk size was 
> less than the reserved space. If the message had been logged, I would have 
> resolved the problem quickly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
