Fei Hui created HDFS-13863:
------------------------------
Summary: FsDatasetImpl should log DiskOutOfSpaceException
Key: HDFS-13863
URL: https://issues.apache.org/jira/browse/HDFS-13863
Project: Hadoop HDFS
Issue Type: Bug
Components: hdfs
Affects Versions: 3.0.3, 2.9.1, 3.1.0
Reporter: Fei Hui
Assignee: Fei Hui
The code in function *createRbw* is as follows:
{code:java}
try {
  // First try to place the block on a transient volume.
  ref = volumes.getNextTransientVolume(b.getNumBytes());
  datanode.getMetrics().incrRamDiskBlocksWrite();
} catch (DiskOutOfSpaceException de) {
  // Ignore the exception since we just fall back to persistent storage.
} finally {
  if (ref == null) {
    cacheManager.release(b.getNumBytes());
  }
}
{code}
I think we should log the exception, because it took me a long time to
resolve this problem, and others may run into the same issue.
When I tested RAM_DISK, I found that no data was written to the RAM disk.
I debugged, digging into the source code, and found that the RAM disk size
was less than the reserved space. If the exception had been logged, I would
have resolved the problem quickly.
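A minimal sketch of the change I have in mind, assuming the class's
existing LOG field is used (the log level and message wording here are
illustrative, not final):
{code:java}
try {
  // First try to place the block on a transient volume.
  ref = volumes.getNextTransientVolume(b.getNumBytes());
  datanode.getMetrics().incrRamDiskBlocksWrite();
} catch (DiskOutOfSpaceException de) {
  // Log before falling back to persistent storage, so operators can
  // see why the transient volume was skipped (e.g. the RAM disk is
  // smaller than the configured reserved space).
  LOG.warn("Insufficient space on transient volumes, falling back to "
      + "persistent storage: " + de.getMessage());
} finally {
  if (ref == null) {
    cacheManager.release(b.getNumBytes());
  }
}
{code}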