[
https://issues.apache.org/jira/browse/HDFS-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17736174#comment-17736174
]
Ayush Saxena commented on HDFS-17056:
-------------------------------------
Found while trying the 3.3.6 RC; minor stuff, and it should be present in trunk as well.
A typical fix would be to add a simple if check, like the other EC commands have, to the
verifyClusterSetup command; something like this and things should work:
{code:java}
    throw e;
  }
} else {
  // Reject stray arguments instead of silently ignoring them,
  // matching the behavior of the other EC commands.
  if (args.size() > 0) {
    System.err.println(getName() + ": Too many arguments");
    return 1;
  }
  result = dfs.getECTopologyResultForPolicies();
{code}
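With that check in place, running the command without -policy but with a stray policy name should fail fast rather than quietly report on all enabled policies. Roughly the following, assuming the command's getName() renders as -verifyClusterSetup (hypothetical output, not taken from an actual run):
{code:java}
# expected (hypothetical) behavior after the fix
$ bin/hdfs ec -verifyClusterSetup XOR-2-1-1024k
-verifyClusterSetup: Too many arguments
# exits with status 1 instead of printing the topology result for all enabled policies
{code}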
> EC: Fix verifyClusterSetup output in case of an invalid param
> -------------------------------------------------------------
>
> Key: HDFS-17056
> URL: https://issues.apache.org/jira/browse/HDFS-17056
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: ec
> Reporter: Ayush Saxena
> Priority: Major
>
> {code:java}
> bin/hdfs ec -verifyClusterSetup XOR-2-1-1024k
> 9 DataNodes are required for the erasure coding policies: RS-6-3-1024k,
> XOR-2-1-1024k. The number of DataNodes is only 3.
> {code}
> verifyClusterSetup expects -policy followed by the policy names; otherwise it
> defaults to all enabled policies.
> If additional invalid options are passed, it silently ignores them, unlike the
> other EC commands, which fail with a "Too many arguments" error.
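> For context, the behavior boils down to the fact that once the -policy flag is absent, the
> leftover positional arguments are never consulted before the no-argument
> getECTopologyResultForPolicies() call. A standalone sketch of that control flow (plain JDK
> code, not the actual Hadoop source; class and variable names here are made up for illustration):
> {code:java}
> import java.util.ArrayList;
> import java.util.Arrays;
> import java.util.List;
>
> // Illustrative only: mimics the argument handling, not the real ECAdmin class.
> public class VerifyClusterSetupArgsSketch {
>   public static void main(String[] rawArgs) {
>     List<String> args = new ArrayList<>(Arrays.asList(rawArgs));
>     // Stand-in for popping the -policy option off the argument list.
>     boolean isPolicyOption = args.remove("-policy");
>
>     if (isPolicyOption) {
>       System.out.println("Verifying the named policies: " + args);
>     } else {
>       // Current behavior: stray tokens such as "XOR-2-1-1024k" may still be in
>       // 'args', but they are ignored and all enabled policies are checked.
>       // The proposed fix rejects them here with "Too many arguments".
>       System.out.println("Verifying all enabled policies; leftover args ignored: " + args);
>     }
>   }
> }
> {code}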