[ https://issues.apache.org/jira/browse/HDFS-13175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370665#comment-16370665 ]

Anu Engineer edited comment on HDFS-13175 at 2/20/18 10:28 PM:
---------------------------------------------------------------

[~eddyxu], we capture a file called datanode.before.json that contains the 
full set of Datanode reports we read from the Namenode connector. The 
default path is {{/system/diskbalancer/<node>.<date-time>/before.json}}. 
Please check whether you have that file; if so, we will be able to 
reproduce this issue. It is possible that we crashed before we wrote this 
file; if so, maybe we should save the data before we process it.
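As a rough sketch of that idea (a hypothetical helper, not what the connector 
does today; it is shown with local java.nio I/O for brevity, whereas the real 
tool presumably writes through Hadoop's FileSystem API), the raw report could 
be persisted before any validation runs:
{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical helper: persist the raw Datanode report JSON before it is
// parsed, so a crash during processing still leaves the input on disk.
public final class ReportSnapshot {
  public static Path save(String node, String dateTime, String reportJson)
      throws IOException {
    // Mirrors the default layout mentioned above:
    //   /system/diskbalancer/<node>.<date-time>/before.json
    Path dir = Paths.get("/system/diskbalancer", node + "." + dateTime);
    Files.createDirectories(dir);
    return Files.write(dir.resolve("before.json"),
        reportJson.getBytes(StandardCharsets.UTF_8));
  }
}
{code}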


was (Author: anu):
[~eddyxu], we capture a file called datanode.before.json that contains the 
full set of Datanode reports we read from the Namenode connector. The 
default path is {{/system/diskbalancer/<node>.<date-time>/before.json}}. 
Please check whether you have that file; if so, we will be able to 
reproduce this issue.

> Add more information for checking argument in DiskBalancerVolume
> ----------------------------------------------------------------
>
>                 Key: HDFS-13175
>                 URL: https://issues.apache.org/jira/browse/HDFS-13175
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: diskbalancer
>    Affects Versions: 3.0.0
>            Reporter: Lei (Eddy) Xu
>            Assignee: Lei (Eddy) Xu
>            Priority: Minor
>         Attachments: HDFS-13175.00.patch
>
>
> We have seen the following stack in production
> {code}
> Exception in thread "main" java.lang.IllegalArgumentException
>       at com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
>       at org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerVolume.setUsed(DiskBalancerVolume.java:268)
>       at org.apache.hadoop.hdfs.server.diskbalancer.connectors.DBNameNodeConnector.getVolumeInfoFromStorageReports(DBNameNodeConnector.java:141)
>       at org.apache.hadoop.hdfs.server.diskbalancer.connectors.DBNameNodeConnector.getNodes(DBNameNodeConnector.java:90)
>       at org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerCluster.readClusterInfo(DiskBalancerCluster.java:132)
>       at org.apache.hadoop.hdfs.server.diskbalancer.command.Command.readClusterInfo(Command.java:123)
>       at org.apache.hadoop.hdfs.server.diskbalancer.command.PlanCommand.execute(PlanCommand.java:107)
> {code}
> raised from 
> {code}
>  public void setUsed(long dfsUsedSpace) {
>     Preconditions.checkArgument(dfsUsedSpace < this.getCapacity());
>     this.used = dfsUsedSpace;
>   }
> {code}
> However, the datanode reports at that moment were not captured. We should 
> include more information in the exception message to better diagnose the 
> issue; a sketch of one way to do that follows below.
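> For illustration, Guava's templated {{checkArgument}} overload can embed 
> the offending values in the exception message. A minimal sketch (not 
> necessarily what the attached patch does, and it assumes the volume 
> exposes a {{getPath()}} accessor):
> {code}
>  public void setUsed(long dfsUsedSpace) {
>     // Fail with a message that records the values that broke the invariant.
>     Preconditions.checkArgument(dfsUsedSpace < this.getCapacity(),
>         "Used space (%s) must be less than capacity (%s) for volume: %s",
>         dfsUsedSpace, this.getCapacity(), this.getPath());
>     this.used = dfsUsedSpace;
>   }
> {code}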


