[
https://issues.apache.org/jira/browse/HDFS-13175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370697#comment-16370697
]
[~eddyxu] Just checked code, we need to move this block, after we call
{{readClusterInfo(cmd);}}
{code:java}
try (FSDataOutputStream beforeStream = create(String.format(
    DiskBalancerCLI.BEFORE_TEMPLATE,
    cmd.getOptionValue(DiskBalancerCLI.PLAN)))) {
  beforeStream.write(getCluster().toJson()
      .getBytes(StandardCharsets.UTF_8));
}
{code}
but before we call {{computePlan}}; that way we will always write
{{before.json}}, even if plan computation fails:
{{List<NodePlan> plans = getCluster().computePlan(this.thresholdPercentage);}}
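The effect of that reordering can be sketched as follows. This is an illustrative stand-in, not the actual {{PlanCommand}} code: the step names mirror the real calls ({{readClusterInfo}}, the {{before.json}} write, {{computePlan}}), but the class, the recorded step list, and the failure flag are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of the suggested call ordering in PlanCommand.execute():
 * write before.json after readClusterInfo(cmd) but before computePlan(),
 * so the cluster snapshot is persisted even when computePlan() throws.
 */
public class PlanOrderSketch {

  // Records each step so the ordering can be inspected.
  static List<String> run(boolean computePlanFails) {
    List<String> steps = new ArrayList<>();
    steps.add("readClusterInfo");    // populates the cluster model
    steps.add("write before.json");  // snapshot is now guaranteed on disk
    try {
      if (computePlanFails) {
        // stands in for the IllegalArgumentException seen in production
        throw new IllegalArgumentException("computePlan failed");
      }
      steps.add("computePlan");
    } catch (IllegalArgumentException e) {
      // before.json was already written, so the failure is diagnosable
    }
    return steps;
  }

  public static void main(String[] args) {
    System.out.println(run(true));   // [readClusterInfo, write before.json]
    System.out.println(run(false));  // [readClusterInfo, write before.json, computePlan]
  }
}
```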
> Add more information for checking argument in DiskBalancerVolume
> ----------------------------------------------------------------
>
> Key: HDFS-13175
> URL: https://issues.apache.org/jira/browse/HDFS-13175
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: diskbalancer
> Affects Versions: 3.0.0
> Reporter: Lei (Eddy) Xu
> Assignee: Lei (Eddy) Xu
> Priority: Minor
> Attachments: HDFS-13175.00.patch
>
>
> We have seen the following stack in production
> {code}
> Exception in thread "main" java.lang.IllegalArgumentException
>   at com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
>   at org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerVolume.setUsed(DiskBalancerVolume.java:268)
>   at org.apache.hadoop.hdfs.server.diskbalancer.connectors.DBNameNodeConnector.getVolumeInfoFromStorageReports(DBNameNodeConnector.java:141)
>   at org.apache.hadoop.hdfs.server.diskbalancer.connectors.DBNameNodeConnector.getNodes(DBNameNodeConnector.java:90)
>   at org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerCluster.readClusterInfo(DiskBalancerCluster.java:132)
>   at org.apache.hadoop.hdfs.server.diskbalancer.command.Command.readClusterInfo(Command.java:123)
>   at org.apache.hadoop.hdfs.server.diskbalancer.command.PlanCommand.execute(PlanCommand.java:107)
> {code}
> raised from
> {code}
> public void setUsed(long dfsUsedSpace) {
>   Preconditions.checkArgument(dfsUsedSpace < this.getCapacity());
>   this.used = dfsUsedSpace;
> }
> {code}
> However, the datanode reports at that very moment were not captured. We should
> add more information to the exception message to make the issue easier to diagnose.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)