[ https://issues.apache.org/jira/browse/HDFS-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16598144#comment-16598144 ]
Hudson commented on HDFS-13863:
-------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14855 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14855/])
HDFS-13863. FsDatasetImpl should log DiskOutOfSpaceException. (yqlin: rev 582cb10ec74ed5666946a3769002ceb80ba660cb)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


> FsDatasetImpl should log DiskOutOfSpaceException
> ------------------------------------------------
>
>                 Key: HDFS-13863
>                 URL: https://issues.apache.org/jira/browse/HDFS-13863
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 3.1.0, 2.9.1, 3.0.3
>            Reporter: Fei Hui
>            Assignee: Fei Hui
>            Priority: Major
>             Fix For: 3.2.0, 3.0.4, 3.1.2
>
>         Attachments: HDFS-13863.001.patch, HDFS-13863.002.patch, HDFS-13863.003.patch
>
>
> The *createRbw* method contains the following code:
> {code:java}
> try {
>   // First try to place the block on a transient volume.
>   ref = volumes.getNextTransientVolume(b.getNumBytes());
>   datanode.getMetrics().incrRamDiskBlocksWrite();
> } catch (DiskOutOfSpaceException de) {
>   // Ignore the exception since we just fall back to persistent storage.
> } finally {
>   if (ref == null) {
>     cacheManager.release(b.getNumBytes());
>   }
> }
> {code}
> I think we should log the exception, because it took me a long time to diagnose this problem, and others may hit the same issue.
> When I tested ram_disk, I found that no data was being written to the RAM disk. Debugging deep into the source code, I found that the RAM disk size was less than the reserved space. If the exception message had been logged, I would have resolved the problem quickly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
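For context, the pattern the patch addresses can be sketched as follows. This is a minimal, self-contained illustration of "log the exception, then fall back to persistent storage", not the actual FsDatasetImpl code: the class, method names, byte counts, and the stand-in DiskOutOfSpaceException below are all hypothetical simplifications.

```java
import java.util.logging.Logger;

public class CreateRbwSketch {
    // Stand-in for Hadoop's DiskChecker.DiskOutOfSpaceException.
    static class DiskOutOfSpaceException extends Exception {
        DiskOutOfSpaceException(String msg) { super(msg); }
    }

    static final Logger LOG = Logger.getLogger(CreateRbwSketch.class.getName());

    // Simulates volumes.getNextTransientVolume(): throws when the RAM disk
    // has less free space than the requested block size, as in the
    // reporter's scenario (RAM disk smaller than the reserved space).
    static String getNextTransientVolume(long numBytes, long ramDiskFree)
            throws DiskOutOfSpaceException {
        if (numBytes > ramDiskFree) {
            throw new DiskOutOfSpaceException(
                "requested " + numBytes + " bytes, only " + ramDiskFree + " free");
        }
        return "transient_volume";
    }

    // The fix in spirit: instead of an empty catch block, log the
    // DiskOutOfSpaceException before falling back to persistent storage,
    // so operators can see why the RAM disk was skipped.
    static String chooseVolume(long numBytes, long ramDiskFree) {
        try {
            return getNextTransientVolume(numBytes, ramDiskFree);
        } catch (DiskOutOfSpaceException de) {
            LOG.warning("Failed to place block on a transient volume, "
                + "falling back to persistent storage: " + de.getMessage());
            return "persistent_volume";
        }
    }

    public static void main(String[] args) {
        // Block fits on the RAM disk.
        System.out.println(chooseVolume(100, 1024));
        // Block does not fit: falls back, and now emits a WARN log line.
        System.out.println(chooseVolume(2048, 1024));
    }
}
```

With the empty catch block, the second call would fall back silently; with the log line, the operator immediately sees the reason the RAM disk was bypassed.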