[
https://issues.apache.org/jira/browse/HDFS-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
eBugs in Cloud Systems updated HDFS-14468:
------------------------------------------
Description:
Dear HDFS developers, we are developing a tool to detect exception-related bugs
in Java. Our prototype has spotted the following three {{throw}} statements
whose exception class and error message seem to indicate different error
conditions. Since we are not very familiar with HDFS's internal workflow,
could you please help us verify whether this is a bug, i.e., will the callers have
trouble handling the exception, and will the users/admins have trouble
diagnosing the failure?
Version: Hadoop-3.1.2
File:
HADOOP-ROOT/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/StorageLocationChecker.java
Line: 96-98, 110-113, and 173-176
{code:java}
throw new DiskErrorException("Invalid value configured for "
    + DFS_DATANODE_DISK_CHECK_TIMEOUT_KEY + " - "
    + maxAllowedTimeForCheckMs + " (should be > 0)");{code}
{code:java}
throw new DiskErrorException("Invalid value configured for "
    + DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY + " - "
    + maxVolumeFailuresTolerated + " "
    + DataNode.MAX_VOLUME_FAILURES_TOLERATED_MSG);{code}
{code:java}
throw new DiskErrorException("Invalid value configured for "
    + DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY + " - "
    + maxVolumeFailuresTolerated + ". Value configured is >= "
    + "to the number of configured volumes (" + dataDirs.size() + ").");{code}
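For context, the first of these throws appears to be reached purely while validating a configuration value, before any disk I/O happens. A rough sketch of the surrounding logic (reconstructed from the snippet above; the {{getTimeDuration}} call and the default-value constant are our assumptions, not the verbatim HDFS source):
{code:java}
// Sketch of the surrounding validation (reconstructed, not verbatim source):
// the exception is thrown while parsing the configuration, before any disk access.
long maxAllowedTimeForCheckMs = conf.getTimeDuration(
    DFS_DATANODE_DISK_CHECK_TIMEOUT_KEY,
    DFS_DATANODE_DISK_CHECK_TIMEOUT_DEFAULT,   // assumed default-value constant
    TimeUnit.MILLISECONDS);

if (maxAllowedTimeForCheckMs <= 0) {
  throw new DiskErrorException("Invalid value configured for "
      + DFS_DATANODE_DISK_CHECK_TIMEOUT_KEY + " - "
      + maxAllowedTimeForCheckMs + " (should be > 0)");
}{code}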
Reason: A {{DiskErrorException}} means an error has occurred while the process
was interacting with the disk, e.g., in
{{org.apache.hadoop.util.DiskChecker.checkDirInternal()}} we have the following
code (lines 97-98):
{code:java}
throw new DiskErrorException("Cannot create directory: " +
    dir.toString());{code}
However, the error messages of the three exceptions above indicate that the
{{StorageLocationChecker}} is configured incorrectly, which means there is
nothing wrong with the disk (yet). Will this mismatch be a problem? For
example, will callers that try to handle other {{DiskErrorException}}s
accidentally (and incorrectly) handle the configuration error as well?
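To make the concern concrete, consider a hypothetical caller (not actual HDFS code; the handler name is invented for illustration) that catches {{DiskErrorException}} expecting a genuine disk failure:
{code:java}
try {
  // check() is expected to surface real disk problems, but the validation
  // paths above throw the same exception type for bad configuration values.
  storageLocationChecker.check(conf, dataDirs);
} catch (DiskErrorException e) {
  // A handler written for genuine disk failures (e.g., marking a volume as
  // failed and continuing) would also receive the configuration error.
  handleFailedVolume(e);  // hypothetical handler
}{code}
If such a handler exists, a misconfigured value could be treated like a failed disk instead of being surfaced as a configuration error.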
was:
Dear HDFS developers, we are developing a tool to detect exception-related bugs
in Java. Our prototype has spotted the following four {{throw}} statements
whose exception class and error message seem to indicate different error
conditions. Since we are not very familiar with HDFS's internal workflow,
could you please help us verify whether this is a bug, i.e., will the callers have
trouble handling the exception, and will the users/admins have trouble
diagnosing the failure?
Version: Hadoop-3.1.2
File:
HADOOP-ROOT/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/StorageLocationChecker.java
Line: 96-98, 110-113, and 173-176
{code:java}
throw new DiskErrorException("Invalid value configured for "
+ DFS_DATANODE_DISK_CHECK_TIMEOUT_KEY + " - "
+ maxAllowedTimeForCheckMs + " (should be > 0)");{code}
{code:java}
throw new DiskErrorException("Invalid value configured for "
+ DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY + " - "
+ maxVolumeFailuresTolerated + " "
+ DataNode.MAX_VOLUME_FAILURES_TOLERATED_MSG);{code}
{code:java}
throw new DiskErrorException("Invalid value configured for "
+ DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY + " - "
+ maxVolumeFailuresTolerated + ". Value configured is >= "
+ "to the number of configured volumes (" + dataDirs.size() + ").");{code}
Reason: A {{DiskErrorException}} means an error has occurred when the process
is interacting with the disk, e.g., in
{{org.apache.hadoop.util.DiskChecker.checkDirInternal()}} we have the following
code (lines 97-98):
{code:java}
throw new DiskErrorException("Cannot create directory: " +
dir.toString());{code}
However, the error messages of the first three exceptions indicate that the
{{StorageLocationChecker}} is configured incorrectly, which means there is
nothing wrong with the disk (yet). Will this mismatch be a problem? For
example, will callers that try to handle other {{DiskErrorException}}s
accidentally (and incorrectly) handle the configuration error as well?
> StorageLocationChecker methods throw DiskErrorExceptions when the
> configuration has wrong values
> ------------------------------------------------------------------------------------------------
>
> Key: HDFS-14468
> URL: https://issues.apache.org/jira/browse/HDFS-14468
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: eBugs in Cloud Systems
> Priority: Minor
>
> Dear HDFS developers, we are developing a tool to detect exception-related
> bugs in Java. Our prototype has spotted the following three {{throw}}
> statements whose exception class and error message seem to indicate different
> error conditions. Since we are not very familiar with HDFS's internal work
> flow, could you please help us verify if this is a bug, i.e., will the
> callers have trouble handling the exception, and will the users/admins have
> trouble diagnosing the failure?
>
> Version: Hadoop-3.1.2
> File:
> HADOOP-ROOT/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/StorageLocationChecker.java
> Line: 96-98, 110-113, and 173-176
> {code:java}
> throw new DiskErrorException("Invalid value configured for "
> + DFS_DATANODE_DISK_CHECK_TIMEOUT_KEY + " - "
> + maxAllowedTimeForCheckMs + " (should be > 0)");{code}
> {code:java}
> throw new DiskErrorException("Invalid value configured for "
> + DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY + " - "
> + maxVolumeFailuresTolerated + " "
> + DataNode.MAX_VOLUME_FAILURES_TOLERATED_MSG);{code}
> {code:java}
> throw new DiskErrorException("Invalid value configured for "
> + DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY + " - "
> + maxVolumeFailuresTolerated + ". Value configured is >= "
> + "to the number of configured volumes (" + dataDirs.size() + ").");{code}
> Reason: A {{DiskErrorException}} means an error has occurred when the process
> is interacting with the disk, e.g., in
> {{org.apache.hadoop.util.DiskChecker.checkDirInternal()}} we have the
> following code (lines 97-98):
> {code:java}
> throw new DiskErrorException("Cannot create directory: " +
> dir.toString());{code}
> However, the error messages of the three exceptions above indicate that the
> {{StorageLocationChecker}} is configured incorrectly, which means there is
> nothing wrong with the disk (yet). Will this mismatch be a problem? For
> example, will callers that try to handle other {{DiskErrorException}}s
> accidentally (and incorrectly) handle the configuration error as well?
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]