Just check the size of the partition that /tmp is on.
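For example, you can ask df which filesystem /tmp lives on and how much space it has (on many default installs /tmp sits on a small root partition, which would explain the ~7.69 GB capacity HDFS reports):

```shell
# Show the partition backing /tmp and its total/available space.
# If the "Size" column is ~8 GB rather than ~250 GB, the big drive
# is mounted elsewhere and hadoop.tmp.dir should point there instead.
df -h /tmp
```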
2008/3/16, Zhu Huijun <[EMAIL PROTECTED]>:
>
> Hi,
>
> I have a question about the DFS capacity. Our cluster has 16 nodes, each
> with a 250 GB hard drive. I use one node as the namenode and the other 15
> as datanodes. However, the web page at http://localhost:50070 shows that
> each node has only 7.69 GB of capacity, with 2.75 GB remaining. I use the
> default "/tmp/hadoop-${user.name}" as the base directory (the first
> property in hadoop-default.xml). In hadoop-site.xml I tried changing this
> directory to my home directory "/home/${user.name}/hadoop", but then only
> one node could be initialized as a datanode, although that node did show
> more than 200 GB of capacity. Can anyone suggest how to make more space
> available to DFS? Do I need an account with root privileges?
>
> Thanks!
>
> Huijun Zhu
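If the big drive is mounted somewhere other than /tmp, one way to use it is to override hadoop.tmp.dir in hadoop-site.xml on every node, pointing it at a directory the hadoop user can write to (no root account needed as long as you own the directory). A sketch, where /data/hadoop is an assumed mount point for the 250 GB drive:

```xml
<!-- hadoop-site.xml: point Hadoop's working directories at the large
     partition. /data/hadoop is a hypothetical path; substitute the
     directory where your 250 GB drive is actually mounted. After
     changing it, re-format the namenode and restart the daemons. -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/data/hadoop/tmp-${user.name}</value>
</property>
```

Whatever path you choose, make sure it exists and is writable by the same user on all 15 datanodes, or only the nodes where it is valid will come up.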
