[
https://issues.apache.org/jira/browse/HDFS-14469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
eBugs in Cloud Systems updated HDFS-14469:
------------------------------------------
Description:
Dear HDFS developers, we are developing a tool to detect exception-related bugs
in Java. Our prototype has spotted the following {{throw}} statement whose
exception class and error message seem to indicate different error conditions.
Version: Hadoop-3.1.2
File:
HADOOP-ROOT/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
Line: 294-297
{code:java}
throw new DiskErrorException("Invalid value configured for "
+ "dfs.datanode.failed.volumes.tolerated - " + volFailuresTolerated
+ ". Value configured is either less than maxVolumeFailureLimit or greater
than "
+ "to the number of configured volumes (" + volsConfigured + ").");{code}
A {{DiskErrorException}} indicates that an error occurred while the process
was interacting with the disk. For example, in
{{org.apache.hadoop.util.DiskChecker.checkDirInternal()}} we have the following
code (lines 97-98):
{code:java}
throw new DiskErrorException("Cannot create directory: " +
dir.toString());{code}
However, the error message of the first exception indicates that
{{dfs.datanode.failed.volumes.tolerated}} is configured incorrectly, which
means there is nothing wrong with the disk (yet). Will this mismatch be a
problem? For example, callers that intend to handle a genuine
{{DiskErrorException}} may accidentally (and incorrectly) handle the
configuration error instead, as sketched below.
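To make the concern concrete, here is a minimal sketch of such a caller. The
{{Volume}} interface and the {{markFailed()}}/{{scheduleScan()}} remediation
hooks are hypothetical, invented only for this illustration;
{{DiskChecker.DiskErrorException}} is the real Hadoop class.
{code:java}
import org.apache.hadoop.util.DiskChecker.DiskErrorException;

public class VolumeMonitor {
    /** Hypothetical volume abstraction, made up for this sketch. */
    interface Volume {
        void check() throws DiskErrorException;
    }

    void markFailed(Volume v) { /* hypothetical: take the volume offline */ }
    void scheduleScan(Volume v) { /* hypothetical: queue a full disk scan */ }

    // A caller like this reasonably assumes every DiskErrorException signals
    // a hardware/filesystem fault. If FsDatasetImpl throws the same class for
    // a bad dfs.datanode.failed.volumes.tolerated value, the handler below
    // would "remediate" a disk that is perfectly healthy.
    void checkVolume(Volume v) {
        try {
            v.check();
        } catch (DiskErrorException e) {
            markFailed(v);    // wrong response to a configuration error
            scheduleScan(v);  // wrong response to a configuration error
        }
    }
}{code}
If the mismatch is confirmed, one option might be to throw
{{org.apache.hadoop.HadoopIllegalArgumentException}} (or a plain
{{IllegalArgumentException}}) from the configuration check instead, so that
callers can distinguish a misconfiguration from an actual disk fault.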
was:
Dear HDFS developers, we are developing a tool to detect exception-related bugs
in Java. Our prototype has spotted the following {{throw}} statement whose
exception class and error message seem to indicate different error conditions.
Since we are not very familiar with HDFS's internal workflow, could you please
help us verify if this is a bug, i.e., will the callers have trouble handling
the exception, and will the users/admins have trouble diagnosing the failure?
Version: Hadoop-3.1.2
File:
HADOOP-ROOT/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
Line: 294-297
{code:java}
throw new DiskErrorException("Invalid value configured for "
+ "dfs.datanode.failed.volumes.tolerated - " + volFailuresTolerated
+ ". Value configured is either less than maxVolumeFailureLimit or greater
than "
+ "to the number of configured volumes (" + volsConfigured + ").");{code}
Reason: A {{DiskErrorException}} indicates that an error occurred while the
process was interacting with the disk. For example, in
{{org.apache.hadoop.util.DiskChecker.checkDirInternal()}} we have the following
code (lines 97-98):
{code:java}
throw new DiskErrorException("Cannot create directory: " +
dir.toString());{code}
However, the error message of the first exception indicates that
{{dfs.datanode.failed.volumes.tolerated}} is configured incorrectly, which
means there is nothing wrong with the disk (yet). Will this mismatch be a
problem? For example, will callers trying to handle a genuine
{{DiskErrorException}} accidentally (and incorrectly) handle the configuration
error?
> FsDatasetImpl() throws a DiskErrorException when the configuration has wrong values
> ------------------------------------------------------------------------------------
>
> Key: HDFS-14469
> URL: https://issues.apache.org/jira/browse/HDFS-14469
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: eBugs in Cloud Systems
> Priority: Minor
>