[ 
https://issues.apache.org/jira/browse/HDFS-4424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Junjun updated HDFS-4424:
----------------------------

    Description: 
File: /hadoop-1.0.1/hdfs/org/apache/hadoop/hdfs/server/datanode/FSDataset.java

from line 205:
{code}
      if (children == null || children.length == 0) {
        children = new FSDir[maxBlocksPerDir];
        for (int idx = 0; idx < maxBlocksPerDir; idx++) {
          children[idx] = new FSDir(new File(dir, 
DataStorage.BLOCK_SUBDIR_PREFIX+idx));
        }
      }
{code}
If the FSDir constructor fails (for example the disk is full, so mkdir fails), the exception propagates, but the partially filled children array is still in use: the slots from the failed index onward remain null.
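The failure mode can be reproduced in isolation. Below is a minimal, hypothetical sketch (SubDir and addBlock stand in for FSDir and the real addBlock; nothing here is the actual Hadoop code) showing how an exception thrown mid-loop leaves null slots in an already-assigned array:

```java
import java.io.IOException;

// Sketch of the bug: the array is assigned BEFORE the loop finishes, so if a
// constructor throws partway through, the later slots stay null and a later
// index into the array throws NullPointerException.
public class PartialArrayDemo {
  static class SubDir {               // hypothetical stand-in for FSDir
    SubDir(int idx) throws IOException {
      if (idx == 2) {                 // simulate mkdir failing (disk full)
        throw new IOException("Mkdirs failed");
      }
    }
    String addBlock() { return "block"; }
  }

  static SubDir[] children;

  public static void main(String[] args) {
    try {
      children = new SubDir[4];       // assigned up front, like in FSDataset
      for (int idx = 0; idx < 4; idx++) {
        children[idx] = new SubDir(idx);
      }
    } catch (IOException e) {
      // constructor failed at idx 2; children[2] and children[3] stay null
    }
    try {
      children[3].addBlock();         // NPE: slot was never filled
    } catch (NullPointerException e) {
      System.out.println("NullPointerException on children[3]");
    }
  }
}
```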


Then, when the next write arrives (after I ran the balancer) and an FSDir is chosen at line 192:
{code}
    File file = children[idx].addBlock(b, src, false, resetIdx);
{code}

it causes an exception like this:
        java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:192)
        at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:192)
        at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:158)




------------------------------------------------
Should it be like this instead?

{code}
      if (children == null || children.length == 0) {
        List<FSDir> childrenList = new ArrayList<FSDir>();
        for (int idx = 0; idx < maxBlocksPerDir; idx++) {
          try {
            childrenList.add(new FSDir(new File(dir,
                DataStorage.BLOCK_SUBDIR_PREFIX + idx)));
          } catch (IOException e) {
            // skip a subdirectory whose mkdir failed (e.g. disk full)
          }
        }
        children = childrenList.toArray(new FSDir[childrenList.size()]);
      }
{code}
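As a standalone illustration of the proposed pattern (collect only the successfully constructed entries in a list, convert to an array once, after the loop), here is a hypothetical sketch; SubDir again stands in for FSDir:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Sketch of the proposed fix: failed constructions are skipped, and the
// array is built in one step afterwards, so it contains no null slots.
public class SkipFailedDemo {
  static class SubDir {               // hypothetical stand-in for FSDir
    final int idx;
    SubDir(int idx) throws IOException {
      if (idx == 2) {                 // simulate mkdir failing for one subdir
        throw new IOException("Mkdirs failed");
      }
      this.idx = idx;
    }
  }

  static SubDir[] build(int max) {
    List<SubDir> list = new ArrayList<SubDir>();
    for (int idx = 0; idx < max; idx++) {
      try {
        list.add(new SubDir(idx));
      } catch (IOException e) {
        // skip the subdirectory that could not be created
      }
    }
    return list.toArray(new SubDir[list.size()]);
  }

  public static void main(String[] args) {
    SubDir[] children = build(4);     // entry for idx 2 is skipped
    System.out.println("children.length = " + children.length);
  }
}
```

One side effect worth noting: when a middle subdirectory is skipped, children[idx] no longer corresponds to subdir with the same index, since the surviving entries are packed together.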



----------------------------
A bad consequence: in my cluster, this datanode's number of blocks became 0.














> fsdataset Mkdirs failure causes NullPointerException and other bad 
> consequences
> -------------------------------------------------------------------------------
>
>                 Key: HDFS-4424
>                 URL: https://issues.apache.org/jira/browse/HDFS-4424
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 1.0.1
>            Reporter: Li Junjun
>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira