[
https://issues.apache.org/jira/browse/HDFS-14467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
eBugs in Cloud Systems updated HDFS-14467:
------------------------------------------
     Summary: DatasetVolumeChecker() throws a DiskErrorException when the
configuration has wrong values  (was: DatasetVolumeChecker() throws
DiskErrorException when the configuration has wrong values)
> DatasetVolumeChecker() throws a DiskErrorException when the configuration has
> wrong values
> ------------------------------------------------------------------------------------------
>
> Key: HDFS-14467
> URL: https://issues.apache.org/jira/browse/HDFS-14467
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 3.1.2
> Reporter: eBugs in Cloud Systems
> Priority: Minor
>
> Dear HDFS developers, we are developing a tool to detect exception-related
> bugs in Java. Our prototype has spotted the following four {{throw}}
> statements whose exception class and error message seem to indicate different
> error conditions. Since we are not very familiar with HDFS's internal
> workflow, could you please help us verify whether this is a bug, i.e.,
> whether the callers will have trouble handling the exception, and whether
> users/admins will have trouble diagnosing the failure?
>
> Version: Hadoop-3.1.2
> File:
> HADOOP-ROOT/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/DatasetVolumeChecker.java
> Line: 122-124, 139-141, 150-152, and 158-161
> {code:java}
> throw new DiskErrorException("Invalid value configured for "
>     + DFS_DATANODE_DISK_CHECK_TIMEOUT_KEY + " - "
>     + maxAllowedTimeForCheckMs + " (should be > 0)");{code}
> {code:java}
> throw new DiskErrorException("Invalid value configured for "
>     + DFS_DATANODE_DISK_CHECK_MIN_GAP_KEY + " - "
>     + minDiskCheckGapMs + " (should be >= 0)");{code}
> {code:java}
> throw new DiskErrorException("Invalid value configured for "
>     + DFS_DATANODE_DISK_CHECK_TIMEOUT_KEY + " - "
>     + diskCheckTimeout + " (should be >= 0)");{code}
> {code:java}
> throw new DiskErrorException("Invalid value configured for "
>     + DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY + " - "
>     + maxVolumeFailuresTolerated + " "
>     + DataNode.MAX_VOLUME_FAILURES_TOLERATED_MSG);{code}
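> For contrast, a configuration-specific exception would make the error
> condition explicit. The following is only an illustrative sketch, not a
> proposed patch; it assumes {{org.apache.hadoop.HadoopIllegalArgumentException}}
> (or a similar configuration-oriented exception) would be acceptable here:
> {code:java}
> // Hypothetical alternative: report a configuration error rather than a disk error.
> if (maxAllowedTimeForCheckMs <= 0) {
>   throw new HadoopIllegalArgumentException("Invalid value configured for "
>       + DFS_DATANODE_DISK_CHECK_TIMEOUT_KEY + " - "
>       + maxAllowedTimeForCheckMs + " (should be > 0)");
> }{code}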
> Reason: A {{DiskErrorException}} indicates that an error occurred while the
> process was interacting with the disk, e.g., in
> {{org.apache.hadoop.util.DiskChecker.checkDirInternal()}} we have the
> following code (lines 97-98):
> {code:java}
> throw new DiskErrorException("Cannot create directory: "
>     + dir.toString());{code}
> However, the error messages of the first four exceptions indicate that the
> {{DatasetVolumeChecker}} is configured incorrectly, which means there is
> nothing wrong with the disk (yet). Will this mismatch be a problem? For
> example, will callers that try to handle other {{DiskErrorException}}s
> accidentally (and incorrectly) handle the configuration error?
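> To make the concern concrete, consider a hypothetical caller (invented for
> illustration, not taken from the HDFS code base) that treats every
> {{DiskErrorException}} as a genuine disk fault:
> {code:java}
> Configuration conf = new HdfsConfiguration();  // possibly carrying a bad value
> try {
>   DatasetVolumeChecker checker = new DatasetVolumeChecker(conf, new Timer());
> } catch (DiskErrorException e) {
>   // A handler written for real disk faults might retry the check or mark
>   // the volume as failed -- neither of which fixes a bad configuration value.
>   handleVolumeFailure(e);  // hypothetical handler, for illustration only
> }{code}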