Hi, I'm using Hadoop 0.20 and trying to understand how Hadoop stores its data. My setup is a single slave with two disks, 500 GB each. In hdfs-site.xml I set dfs.data.dir to the two disks, i.e. /opt/dfs/data,opt1/dfs/data.
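For reference, this is roughly what the relevant property in my hdfs-site.xml looks like, with the paths exactly as I listed them above:

    <property>
      <name>dfs.data.dir</name>
      <value>/opt/dfs/data,opt1/dfs/data</value>
    </property>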
Now, a couple of things. When I run a report, i.e. ./hadoop dfsadmin -report, it only shows a configured capacity of 500 GB; shouldn't that be roughly twice that, since there are two 500 GB disks? And when I watch the data being written, it only goes to /opt/dfs/data. There is no /opt1/dfs/data directory at all. Shouldn't that have been created when I formatted HDFS? Could anyone tell me if there is an easy way to add this second disk to HDFS while preserving the existing data? And any ideas what I did wrong that caused it not to be created/used? Any insight would be appreciated.
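In case it helps, this is what I was planning to try. I'm not at all sure it's the right procedure, and the hadoop:hadoop owner and the bin/ script paths are just guesses for my setup:

    # Stop HDFS before touching the data directories
    bin/stop-dfs.sh

    # Create the second data directory by hand and give it to the
    # user the DataNode runs as (hadoop:hadoop is a guess)
    mkdir -p /opt1/dfs/data
    chown -R hadoop:hadoop /opt1/dfs/data

    # Double-check that both entries in dfs.data.dir are absolute
    # paths, then bring HDFS back up and re-check the capacity
    bin/start-dfs.sh
    bin/hadoop dfsadmin -report

My assumption is that the existing blocks under /opt/dfs/data stay where they are and the DataNode simply starts using both directories, but I'd appreciate confirmation before I try it.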
Cheers, Arv