[
https://issues.apache.org/jira/browse/HDFS-4424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Li Junjun updated HDFS-4424:
----------------------------
Description:
From line 205:

    if (children == null || children.length == 0) {
      children = new FSDir[maxBlocksPerDir];
      for (int idx = 0; idx < maxBlocksPerDir; idx++) {
        children[idx] = new FSDir(new File(dir,
            DataStorage.BLOCK_SUBDIR_PREFIX + idx));
      }
    }

If the FSDir constructor fails (e.g. the volume is full, so mkdir fails), the children array is still used!
Then when the next write arrives (after I ran the balancer) and an FSDir is chosen at line 192:

    File file = children[idx].addBlock(b, src, false, resetIdx);

it causes exceptions like this:

    at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:192)
    at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:192)
    at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:158)
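To illustrate the failure mode, here is a minimal, hypothetical sketch (the class name SubDir is illustrative, not the actual FSDataset code): if the constructor ignores a failed mkdirs(), the caller keeps an unusable entry in children[]; checking the return value and failing fast would surface the full volume immediately instead of as a later exception from children[idx].addBlock(...).

```java
import java.io.File;

// Hypothetical stand-in for FSDir, sketching one possible guard:
// verify that mkdirs() actually succeeded instead of silently
// continuing with a directory that was never created.
class SubDir {
    final File dir;

    SubDir(File dir) {
        this.dir = dir;
        if (!dir.exists() && !dir.mkdirs()) {
            // Fail fast rather than leaving a broken entry for callers to use
            throw new IllegalStateException("Mkdirs failed to create " + dir);
        }
    }
}
```

With a guard like this, the loop at line 205 would abort on the first unusable subdirectory instead of populating children with entries that later fail in addBlock.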
> fsdataset Mkdirs failure causes NullPointerException and other bad
> consequences
> -------------------------------------------------------------------------------
>
> Key: HDFS-4424
> URL: https://issues.apache.org/jira/browse/HDFS-4424
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 1.0.1
> Reporter: Li Junjun
>
> From line 205:
>
>     if (children == null || children.length == 0) {
>       children = new FSDir[maxBlocksPerDir];
>       for (int idx = 0; idx < maxBlocksPerDir; idx++) {
>         children[idx] = new FSDir(new File(dir,
>             DataStorage.BLOCK_SUBDIR_PREFIX + idx));
>       }
>     }
>
> If the FSDir constructor fails (e.g. the volume is full, so mkdir fails), the children array is still used!
> Then when the next write arrives (after I ran the balancer) and an FSDir is chosen at line 192:
>
>     File file = children[idx].addBlock(b, src, false, resetIdx);
>
> it causes exceptions like this:
>
>     at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:192)
>     at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:192)
>     at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:158)
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira